Works pretty well. There is noticeable desync when one device is plugged into speakers, though (at least on Windows). It'd be great if you had some way to measure a specific device's audio lag, or at least let us adjust that offset ourselves.
whimsy 18 hours ago [-]
This is very, very cool; it's a thing I've been looking for on my backburner for several years. It's a very interesting problem.
There are a ton of directions I can think about you taking it in.
The household application: this one is already pretty directly applicable. Have a bunch of wireless speakers and you should be able to make it sound really good from anywhere, yes? You would probably want support for static configurations, and there's a good chance each client isn't going to be able to run the full suite, but the server can probably still figure out what to send to each client based on timing data.
Relatedly, it would be nice to have a sense of "facing" for the point on the virtual grid and adjust 5.1 channels accordingly, automatically (especially left/right). [Oh, maybe this is already implicit in the grid - "up" is "forward"?]
The party application: this would be a cool trick that would take a lot more work. What if each device could locate itself in actual space automatically and figure out its sync accordingly as it moved? This might not be possible purely with software - especially with just the browser's access to sensors related to high-accuracy location based on, for example, wi-fi sources. However, it would be utterly magical to be able to install an app, join a host, and let your phone join a mob of other phones as individual speakers in everyone's pockets at a party and have positional audio "just work." The "wow" factor would be off the charts.
On a related note, it could be interesting to add a "jukebox" front-end - some way for clients to submit and negotiate tracks for the play queue.
Another idea - account for copper and optical cabling. The latency issue isn't restricted to the clocks that you can see. Adjusting audio timing for long audio cable runs matters a lot in large areas (say, a stadium or performance hall) but it can still matter in house-sized settings, too, depending on how speakers are wired. For a laptop speaker, there's no practical offset between the clock's time and the time as which sound plays, but if the audio output is connected to a cable run, it would be nice - and probably not very hard - to add some static timing offset for the physical layer associated with a particular output (or even channel). It might even be worth it to be able to calculate it for the user. (This speaker is 300 feet away from its output through X meters of copper; figure out my additional latency offset for me.)
camtarn 17 hours ago [-]
> This speaker is 300 feet away from its output through X meters of copper; figure out my additional latency offset for me.
0.3 microseconds. The period of a wave at 20kHz (very roughly the highest pitch we can hear) is 50 microseconds. So - more or less insignificant.
Cable latency is basically never an issue for audio. Latency due to speed of sound in air is what you see techs at stadiums and performance halls tuning.
superjan 8 hours ago [-]
For those wondering: the rule of thumb here is that light travels about one foot per nanosecond. 300 ns = 0.3 μs. Electricity in a cable is a bit slower, but the same order of magnitude.
KayEss 40 minutes ago [-]
And by a happy coincidence it turns out that sound does about one foot in one millisecond, making light six orders of magnitude faster.
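For concreteness, a rough back-of-the-envelope sketch in TypeScript (the 0.7c velocity factor for copper and 343 m/s for sound in air are assumptions, not measurements):

```typescript
// Propagation delay over a 300 ft run: cable vs. air.
const runFeet = 300;
const runMeters = runFeet * 0.3048;            // ~91.4 m
const c = 299_792_458;                         // speed of light in vacuum, m/s
const velocityFactor = 0.7;                    // assumed for typical copper cable
const speedOfSound = 343;                      // m/s in air at ~20 °C

const cableDelayUs = (runMeters / (c * velocityFactor)) * 1e6;
const airDelayMs = (runMeters / speedOfSound) * 1e3;

console.log(`cable: ~${cableDelayUs.toFixed(2)} µs`); // ~0.44 µs
console.log(`air:   ~${airDelayMs.toFixed(0)} ms`);   // ~267 ms, the delay techs actually tune for
```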
freemanjiang 17 hours ago [-]
Thank you for the kind words! Yeah, I think it gets a lot more complicated once you start dealing with speaker hardware. It pretty much only works for the device's native speaker at the moment.
The instant you start having wireless speakers (eg. bluetooth) or any sort of significant delay between commanding playback and the actual sound coming out, the latency becomes audible.
raisedbyninjas 14 hours ago [-]
For devices with mics, can you have them play a test chirp to measure the latency of Bluetooth or other laggy sound stack?
hn8726 4 hours ago [-]
Bluetooth audio devices that I use tend to change the protocol as soon as they switch to headset mode (with the microphone enabled), which works terribly for music. I imagine the protocol used when the microphone is enabled might have completely different latency characteristics than the one used purely for audio, so a chirp might be measuring a completely different thing.
sokka_h2otribe 1 hours ago [-]
You could use a different device in the swarm for measurement, but yeah it seems pretty quickly complicated! I have no idea as well how stable the latency is
hgomersall 18 hours ago [-]
Silent disco in which everyone brings their own source and headphones.
cypherpunks01 8 hours ago [-]
Absolutely! Silent disco still requires impractically expensive rental hardware to work well as far as I know. A lot of them run off FM radio, since it's the simplest way to go, but nobody owns portable radios anymore.
An OSS app with the ability to sync everyone up over mobile or wifi, on Android or iOS with BYO headphones, would be incredible. This should be a thing :)
nsteel 17 minutes ago [-]
Snapcast has a webapp and a native android client. Although I'm not sure how well it handles many, many clients. In theory, if all on the same WiFi they should all play in sync like a silent disco (at least for those not using Bluetooth headphones where the playback latency is too high/not available).
vladvasiliu 5 hours ago [-]
I wonder whether something like this (without the OSS part) already exists. Some cinemas in France have some kind of app for people who are either hearing or visually impaired which allows them to follow the movie.
I've never seen it in action and don't know how it works, but at least for the audio part it should be able to synchronize the phone with the cinema screen.
If I'm not mistaken, it's provided by this company: https://www.twavox.com/en/
Roku sticks allow this for TV watching, via the Roku app. No idea how well it works for audio or more latency-sensitive applications.
pcthrowaway 8 hours ago [-]
I believe the syncing won't work when playing with a bluetooth device
throawayonthe 2 hours ago [-]
[dead]
freemanjiang 18 hours ago [-]
I primarily built this for group in-person listening, and that's what the spatial audio controls are for. But what is interesting is that since it only requires the browser, it works across the internet as well. You can guarantee that you and someone else are listening to the same thing even across an ocean.
Someone brought up the idea of an internet radio, which I thought was cool. If you could see a list of all the rooms people are in and tune it to exactly what they're jamming to.
Ne02ptzero 17 hours ago [-]
> You can guarantee that you and someone else are listening to the same thing even across an ocean.
How can you guarantee that? NTP fails to guarantee that all clocks are synced inside a datacenter, let alone across an ocean (Did not read the code yet)
EDIT: The wording got me. "Guarantee" & "Perfect" in the post title, and "Millisecond-accurate synchronization" in the README. Cool project!
moomin 17 hours ago [-]
More, the speed of light puts a hard cap on how simultaneous you can be. Wolfram Alpha reckons New York to London is 19ms in a vacuum, more using fibre.
Going off on a tangent: Back in the days of Live Aid, they tried doing a transatlantic duet. Turns out it’s literally physically impossible, because if A sings when they hear B, then B hears A at least 38ms too late, which is too much for the human body to handle and still make music.
recursive 14 hours ago [-]
It's a less hard problem than the duet. If the round trip is 38ms, you can estimate that the one-way latency is 19ms. You tell the other client to play the audio now, and you schedule it for 19ms in the future.
That's assuming standard OS and hardware and drivers can manage latency with that degree of precision, which I have serious doubts about.
In a duet, your partner needs to hear you now and you need to hear them now. With pre-recorded audio, you can buffer into the future.
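A minimal sketch of that idea in TypeScript (the WebSocket transport and message shapes are hypothetical for illustration, not necessarily how this project does it):

```typescript
// Estimate one-way latency as half the measured round trip, then delay local playback
// by that amount while telling the peer to start on receipt.
async function startTogether(peer: WebSocket, ctx: AudioContext, buffer: AudioBuffer) {
  const t0 = performance.now();
  peer.send(JSON.stringify({ type: "ping" }));
  await new Promise<void>((resolve) => {
    peer.addEventListener("message", function onPong(e) {
      if (JSON.parse(e.data).type === "pong") {
        peer.removeEventListener("message", onPong);
        resolve();
      }
    });
  });
  const oneWaySec = (performance.now() - t0) / 2 / 1000; // e.g. 38 ms RTT -> ~19 ms

  peer.send(JSON.stringify({ type: "play-now" }));        // peer starts as soon as this arrives
  const src = ctx.createBufferSource();
  src.buffer = buffer;
  src.connect(ctx.destination);
  src.start(ctx.currentTime + oneWaySec);                 // local start delayed by the estimate
}
```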
moomin 9 hours ago [-]
You’re right that it’s an easier problem, but it’s still trickier than it looks. Remember the point of this is to be listening together. To do that, you need to be able to communicate your reactions. And then you’re back to the 38ms (in practice it’s probably twice that). Either way, at 120bpm that’s over a bar!
If you _don’t_ have real time communication, then you don’t really need to solve this problem. But the problem is fundamentally unsolvable because the speed of light (in a vacuum) is the speed of causality and, as I say, puts a hard cap on simultaneity. This tends to be regarded as obvious at interstellar distances but it affects us at transatlantic distances too.
freemanjiang 5 hours ago [-]
Haha yeah guarantee is a strong word. I just mean that it’s good enough to not be noticeable (even within the same physical room)
h2zizzle 34 minutes ago [-]
Glad to see that one scene from Rainbows End/the thing I had to tell every Best Buy customer was impossible FINALLY become a reality. P3 All-Out Attack time. Kudos.
thruflo 16 hours ago [-]
This looks really cool, congrats!
Just to share a couple of similar/related projects in case they're useful for reference:
http://strobe.audio - multi-room audio in Elixir
https://www.panaudia.com - multi-user spatial audio mixing in Rust
Have you seen snapcast? That's currently my go-to audio sync solution for running whole-house audio. Always open to alternatives, but so far nothing beats the performance and accessibility.
freemanjiang 18 hours ago [-]
yes but only after posting! it's very cool—i'm actually a little embarrassed to not have seen it before.
they're doing a smarter thing by doing streaming. i don't do any streaming right now.
the upside is that beatsync works in the browser. just a link means no setup is required.
matteason 3 hours ago [-]
This is really cool, thanks for sharing. I've got a couple of pseudo-radio stations on https://ambiph.one/ which are very roughly synchronised for all users but it's based on their device clock so it can get a couple of seconds out of sync very easily. Looking forward to picking through your code to see if there are any techniques I can borrow!
TowerTall 8 hours ago [-]
Could have used this 25 years ago when I was working in a large room with ~100 other people. Every Friday an MP3 was distributed, and then at the same time we all started playing it, signaling that the workday had ended and the Friday bar was open. Fun times.
pete1302 2 hours ago [-]
Great solution based on WebSockets!
My vision: a web-based, VLC-type player (capable of VLC-level features) with support for distributing audio channels over connected devices.
Hear me out:
- Mac as the display (movie screen)
- iPad as the center channel
- 4 iPhones as the L/R and rear channels (and something for the LFE)
Is it practical? Sounds cool in my head. What do you guys think?
dsr_ 51 minutes ago [-]
It is not practical unless you already have all the devices.
Let's suppose that you are paying half the new price for the bottom-tier of each of these:
MBA: 500
iPad mini: 250
iPhone 16e: 300 * 4
Your budget is $1950. For that, you can get:
A 50" 4K TV, a Denon X1700H 7.2 receiver, 6 Klipsch R51M speakers, and a half-decent subwoofer, all new.
This will provide a far superior experience for you and a half-dozen friends, and each part will last longer, have no permanent battery to wear out, and be upgradable independently and without relying on a specific software product. I would estimate the lifetime of your proposal at about 3 years, and of my counter-proposal at 20 years.
Yours is much more portable.
pete1302 29 minutes ago [-]
I was eagerly expecting such a practical breakdown of the vision. Thanks for the numbers.
But the scenario is an open-source solution for a student living in a hostel or at home.
The vision is tilted towards hostel dorms, where you wouldn't bother with a home theater, but where every friend has an Apple device (or any robust device with good onboard audio).
My friend alone has a MacBook Pro and 2 iPhones, bringing the total to 6 Apple devices (audio sinks).
Dwedit 18 hours ago [-]
How does it deal with the audio ring buffers on the various devices? Does it just try to start them all at the same time, or does it take into account the sample position within the buffer?
freemanjiang 17 hours ago [-]
Great question! There are two steps:
First, I do clock synchronization with a central server so that all clients can agree on a time reference.
Then, instead of directly manipulating the hardware audio ring buffers (which browsers don't allow), I use the Web Audio API's scheduling system to play audio in the future at a specific start time, on all devices.
So a central server relays messages from clients, telling them when to start and which sample position in the buffer to start from.
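A minimal sketch of those two pieces in TypeScript (the function names and exact timestamp exchange are assumptions for illustration, not the project's actual code):

```typescript
// 1. NTP-style offset estimate from one ping: the server echoes back when it
//    received the ping (serverRecv) and when it sent the reply (serverSend).
function estimateOffset(t0: number, serverRecv: number, serverSend: number, t1: number): number {
  return ((serverRecv - t0) + (serverSend - t1)) / 2; // positive => server clock is ahead
}

// 2. Given a start time on the server's clock and a sample position, schedule playback
//    locally with the Web Audio API instead of touching hardware ring buffers.
function scheduleStart(
  ctx: AudioContext,
  buffer: AudioBuffer,
  serverStartMs: number,   // "play at this server-clock time"
  clockOffsetMs: number,   // from estimateOffset, ideally averaged over several pings
  startSample: number      // which sample in the buffer to start from
): void {
  const localStartMs = serverStartMs - clockOffsetMs;
  const delaySec = Math.max(0, (localStartMs - Date.now()) / 1000);
  const src = ctx.createBufferSource();
  src.buffer = buffer;
  src.connect(ctx.destination);
  // start(when, offset): `when` is on the AudioContext clock, `offset` is seconds into the buffer
  src.start(ctx.currentTime + delaySec, startSample / buffer.sampleRate);
}
```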
camtarn 17 hours ago [-]
Interesting. Feels like this might still have some noticeable tens-of-milliseconds latency on Windows, where the default audio drivers still have high latency. The browser may intend to play the sound at time t, but when it calls Windows's API to play the sound I'm guessing it doesn't apply a negative time offset?
serial_dev 17 hours ago [-]
So it doesn't need to use the microphone? I guessed as much from the "works across the ocean" comment and based on this description. I would have thought you would listen to the mic and sync based on surrounding audio somehow, but it's good to know that it's not needed.
freemanjiang 17 hours ago [-]
Yup no microphone. It's all clock sync
cosmotic 18 hours ago [-]
Another issue is seeking in compressed audio. When seeking (to sync), some APIs snap to frame boundaries.
cosmotic 14 hours ago [-]
I solved this by decompressing the whole file into memory as PCM.
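For reference, one browser-side way to do that (a sketch; whether it matches the commenter's actual approach is an assumption): decodeAudioData hands back the whole track as float PCM in an AudioBuffer, so seeking becomes a plain sample offset with no frame-boundary snapping.

```typescript
async function loadAsPcm(ctx: AudioContext, url: string): Promise<AudioBuffer> {
  const response = await fetch(url);
  const encoded = await response.arrayBuffer();
  return ctx.decodeAudioData(encoded); // full decode into memory as uncompressed PCM
}
```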
brcmthrowaway 18 hours ago [-]
This is my question: does it do interpolation or pitch bending?
Aldipower 3 hours ago [-]
This is very, very cool! Love it: the interface, the demo, no need to download anything. Impressive.
Are you already doing latency compensation? You could measure the latency if one host becomes a master, and then compensate by delaying the master's playback a little bit.
Groxx 14 hours ago [-]
Impressively accurate - Android phone in Firefox <-> Chrome on OSX == basically perfect to my ear. That's super cool, thanks for sharing!
cypherpunks01 8 hours ago [-]
For fun I tried syncing over Tor as well. It works impressively well! Amazingly tight sync considering the latency is 3 random hops around the world.
_joel 2 hours ago [-]
Really cool, but I hit issues when uploading mixes (MP3s about 90 minutes long, 150MB). They never make it to the playlists.
xnickb 1 hours ago [-]
if it's behind nginx, check client_max_body_size. Set it to something big
_joel 32 minutes ago [-]
Yea could be, this was using their .gg hosted version, I'll try locally
maxmynter95 15 hours ago [-]
It's a really interesting vibe when you play on multiple machines. Sometimes you can notice a slight off-ness, which gives this reverb effect.
awongh 2 hours ago [-]
TIL what NTP is. Interesting to read about how the underlying algorithm works, and that they had pretty good accuracy 40 years ago.
_joel 2 hours ago [-]
Wait until you hear about PTP :)
rezonant 15 hours ago [-]
It's not open source until you pick a license. Since there is no license in this repository, it is at best source-available.
freemanjiang 11 hours ago [-]
Thanks for the heads up! Just added a license to the repo.
lacoolj 16 hours ago [-]
Very very cool idea, but this is a bummer: "Optimized for Chrome on macOS. Unstable for other platforms..."
Once that changes (at the very least, the macOS part), I can't wait to play with it!
freemanjiang 15 hours ago [-]
It works on other platforms! Just not as smooth as Chrome.
freemanjiang 5 hours ago [-]
Made an update so it should be good on most!
Michael9876 10 hours ago [-]
[dead]
gitroom 3 hours ago [-]
Pretty cool, just the fact it works instantly in the browser with nothing to download is actually kinda wild to me tbh.
jauntywundrkind 18 hours ago [-]
Unfortunately the W3C webtiming community group has closed. It'd be amazing to have the browser better able to keep time in sync across devices.
https://www.w3.org/community/webtiming/
https://github.com/webtiming/timingobject
Luckily the audio industry has solved this problem: they use PTP as the clocking mechanism for AES67 (kind of the bastard child of Ravenna and Dante, but with a fully open* AoIP protocol), which is designed to handle all the hard parts of syncing audio over a network. And it's used everywhere these days, but mostly in venues/stadiums/theme parks.
* open if you pay membership dues to the AES or buy the spec
jauntywundrkind 15 hours ago [-]
Hopefully Wi-Fi 8 has something like PTP built in. I hear there's some vague hope that better timing info is one of the core pieces, so maybe, maybe!
I'm super jazzed seeing AES67 emerge.. although it not working great over wifi for lack of proper timing info hurts. Very understandable for professional gear, but there's nothing I love more than seeing professional, prosumer and consumer gear blend together!
PipeWire already has pretty decent support! There's a tracker where people report on with their hardware experiences trying it. Some really really interesting hardware shows up here (and elsewhere on the gitlab): https://gitlab.freedesktop.org/pipewire/pipewire/-/issues/32...
slmkbh 5 hours ago [-]
I did something similar with pulseaudio about 15 years ago, had an old thinkpad running Debian, and then multicast activated on my source. Worked surprisingly well!
joelkoen 9 hours ago [-]
This is the most impressive demo I've ever seen - no app download, no account sign up, no crap, just works instantly. Well done.
freemanjiang 5 hours ago [-]
Thank you! Very appreciated kind stranger :)
bjackman 17 hours ago [-]
Very cool! As someone who doesn't know much about the topic, I'm surprised that "millisecond-level accuracy" is enough. I would have imagined that you need to be accurate down to some fairly small multiple of the sample rate to avoid phasing effects.
Do you have any interesting insight into that question?
cesaref 16 hours ago [-]
If you look at professional distributed audio systems (Dante, AES67 etc) you'll find that they all require PTP support on the hardware to achieve the required timing accuracy, so yes, you need <1ms to get to the point of being considered suitable if you are doing anything which involves, say, mixing multiple streams together and avoiding phasing type effects.
However, it very much depends on what your expectations are, and how critical your listening is. If no one is looking for problems, it can be made to work well enough.
freemanjiang 17 hours ago [-]
Yeah the threshold is pretty brutal, but it is enough. Experimentally, I'd say you need under 2-3ms but even at 1ms you can start to hear some phase differences.
Most of the time, I think my synchronization algorithm is actually sub-1ms, but it can be worse depending on unstable network conditions.
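A quick worked example of why even ~1 ms matters, in the idealized case of two equal-amplitude copies of the same signal (real rooms and physically separated speakers complicate this):

```typescript
// Two copies of a signal offset by Δt comb-filter; the first cancellation notch
// sits at f = 1 / (2·Δt).
for (const dtMs of [0.5, 1, 2, 3]) {
  const firstNotchHz = 1 / (2 * (dtMs / 1000));
  console.log(`${dtMs} ms offset -> first notch at ${firstNotchHz.toFixed(0)} Hz`);
}
// 0.5 ms -> 1000 Hz, 1 ms -> 500 Hz, 2 ms -> 250 Hz, 3 ms -> ~167 Hz: all well inside the audible band.
```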
mkishi 15 hours ago [-]
How are you measuring this? I'm surprised the Web Audio API scheduling system has that much insight into the hardware latency.
urbandw311er 40 minutes ago [-]
I was wondering that too. It’s an impressive demo when used on devices with low latency audio drivers but I’m not convinced there’s any ability to detect drift beyond this. Might be interesting to have an option to use microphones to detect and calibrate this… …but then you have the same issue of an unknown delay on the microphone input too.
hatthew 17 hours ago [-]
Sound travels at a speed of ~1 foot/millisecond
camtarn 15 hours ago [-]
Oh, that's a nice approximation! Similar to Grace Hopper's famous demo of a six inch wire being about how far electrical signals travel in a nanosecond.
radley 8 hours ago [-]
Cool, keep it up!
For anyone who's curious, Airfoil (a paid app) can play simultaneously from a Mac to a variety of devices: https://rogueamoeba.com/airfoil/mac/
Any plans to integrate this with Apple Music or Spotify? I would assume your algorithm would work only with files uploaded to the site, but curious if you had plans to attempt something with Apple Music/Spotify
freemanjiang 15 hours ago [-]
Yes! The very next step.
alexweej_ 7 hours ago [-]
This is kind of where my attempt at this idea during lockdown died... Copyright law
LordGrignard 11 hours ago [-]
hello! the app looks very polished and I'm sure there are a lot of use cases for everyone else, but I wanted to ask whether this can be used to sync playlist progress of your offline library (it's FLAC ofc) across devices? It's something I have not found a solution for at all, other than some Plex thing which is paid. If you're synchronizing to millisecond accuracy, it should work for simply keeping track of the shuffle order of the playlist and the last played song (i.e. the position in the ordered playlist)?
fitsumbelay 12 hours ago [-]
This has been popping up in various feeds of mine since yesterday (Mon 4/28)
Although I don't really know anything about NTP or networking, I appreciate the use of Boring Old Tech for making this awesome software.
gkanai 10 hours ago [-]
Interesting idea.
Have you thought about integrating support for timecode? Dante support also might bring your software to professional venues.
kelvinzhang 8 hours ago [-]
The sync was so seamless I didn't even realize it was playing from my own device at first
RicoElectrico 14 hours ago [-]
Does this resync periodically? (I mean not only when a new track starts)
freemanjiang 11 hours ago [-]
It doesn't at the moment, but I think it probably should. There's a non-trivial amount of clock drift that can happen over long periods of time.
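For a rough sense of scale (assuming a typical ±20 ppm consumer crystal, which is an assumption rather than a measurement of any particular device):

```typescript
const ppm = 20;                                     // assumed oscillator tolerance
const driftMsPerHour = (ppm / 1e6) * 3_600_000;     // 72 ms per hour per device
const relativeDrift = 2 * driftMsPerHour;           // two devices drifting apart: up to ~144 ms/hour
// Against the ~2-3 ms audible threshold mentioned above, that suggests resyncing
// on the order of every minute or two to stay comfortably inside it.
console.log(driftMsPerHour, relativeDrift);
```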
HelloUsername 16 hours ago [-]
Cool! I'd swap the 'search music' (cobalt.tools) button with the 'upload audio' button
crunchwrapjs 18 hours ago [-]
i've been wanting to make this for so long! it's crazy that it's done completely in the browser
ajb 16 hours ago [-]
That's cool!
Last I heard, Safari was buggy and behind on Web Audio - did you run into any issues there?
freemanjiang 5 hours ago [-]
Miraculously pulled it off with a change I made today
js4ever 15 hours ago [-]
Love it, this is impressive and very smart, no need for mic!
badmonster 16 hours ago [-]
how does it achieve millisecond-accurate multi-device audio synchronization across browsers?
emilfihlman 7 hours ago [-]
Your CSS is broken in that it doesn't take the URL/menu bar on phones into account.
Yes, it's a super annoying problem. You should change the CSS so that the URL bar is always visible, and have a separate full-screen button.
freemanjiang 5 hours ago [-]
Oh I see, yeah I was wondering why it looked like that since on my computer responsive view looked great. Will look into this, thank you.
krick 13 hours ago [-]
Very cool, but lacks volume controls.
dddw 18 hours ago [-]
Good demo!
johng 18 hours ago [-]
Very cool!
freemanjiang 18 hours ago [-]
thank you!
djkesu 15 hours ago [-]
This is very cool.
ng-henry 16 hours ago [-]
this looks so cool!
shahanneda 18 hours ago [-]
awesome!
imcritic 5 hours ago [-]
Nice idea, but this project is currently written in TypeScript, so I view it as a prototype at best.