pierrec 19 hours ago

In this area of research, there's this classic trap that many fall into (including myself, many times). You focus on modeling things like the vibrating string, the resonant body, etc. to perfection. But it still sounds, uh, not great, because the more important and difficult part is modeling detailed control and the human/instrument interface. Air fluctuations around the violin sound like a fun experiment, but I don't think you'll get much additional realism from that, compared to a simple/classical impulse response model.

Even in this case, they're choosing the easy path (plucked, pizzicato), but the human/instrument interface is still audibly oversimplified while the resonant body has an unnecessary amount of "realism". The sound of pizzicato has a distinct character because the player's finger/skin slides a bit on the string as they're plucking, among other factors, which sounds like it's missing here. This can be tricky to implement because it's not necessarily a one-way impulse. The string is already vibrating and affects the finger, hence "interface".

This applies 10x more with bowed strings.

  • shiandow 18 hours ago

    Put differently, it takes years of practice to get a decent sound out of a real violin.

    If your model doesn't sound like someone's strangling a cat then it's probably not realistic.

    • saidnooneever 4 hours ago

      It's not _that_ hard -_-. It's just that a lot of violin music played solo is an acquired taste... just like many flutes, or drums.

      The real sound comes when it's played with other instruments in concert. It doesn't need years of practice; it needs patience, the right setting and an extra joint on your pinky :p

      • shiandow 2 hours ago

        My experience may be a bit biased by having to listen to children attempting to learn the instrument.

  • Fraterkes 17 hours ago

    The point of this research doesn't seem to be to generate a nice sounding digital instrument, but to give violin makers a rough idea of what the instrument will sound like for different shapes / materials. This is useful for comparing designs, even if you don't simulate a human performance with complete fidelity.

    So I don't know if your criticism makes much sense.

    • traderj0e 17 hours ago

      Either way, the demo doesn't sound like a violin yet. It's really cool that they got it from just a CT scan, and I get the usefulness if they're able to fill in the gaps.

    • pierrec 15 hours ago

      Fair point, my (and many people's) point of view is very virtual instrument centric. I'd still say that better impulses would improve the tool for the potential luthier application.

    • rendaw 6 hours ago

      How useful is it though if it doesn't resemble what it will sound like when actually played? IIUC the difference between this and a human playing is to some degree subtle. But the audio difference between different violin designs will also be subtle.

  • analog31 14 hours ago

    I'm a working jazz bassist. Plucking a string is an art unto itself. What struck me about the clip was that it sounded like the strings were plucked by something other than a finger.

    • Slow_Hand 5 hours ago

      Yeah. It sounds more like a dulcimer (hammered). If someone asked me to guess the instrument, violin would not have been my first guess.

  • mochomocha 11 hours ago

    Every physical model has its strengths and weaknesses. In this case you're correct that no emphasis has been put on the human-instrument coupling, but they've worked harder than usual on the air/instrument coupling, which makes sense given their goal of helping violin makers. However, there is plenty of research work on physical modeling of the human-instrument interaction (Serafin, Woodhouse, Chaigne etc. from 1-2 decades ago), especially for violin. For string plucking & striking specifically, the coupling is modeled in a few commercial products (Pianoteq is a popular one) [disclaimer: I used to sell a guitar software doing this as well and met some of these acoustics experts a long time ago - these were fun times going down the rabbit hole of physical modeling]

  • saidnooneever 4 hours ago

    I think it's kind of funny: simulating this must be incredibly tedious, while there are many songs with good fake violins...

    You mention a few details; there are so many more if you think about it. The human-instrument interaction has all sorts of imperfections.

    Tension in the shoulders can make you bend the neck a bit. Too much tension in the fingers might pull a note out of tune. Pushing not 100% straight along the bow might shift it sideways a bit, changing how it crosses the strings. Then of course there's the position of the bow on the strings (closer/further from the bridge).

    Humans are not perfect machines, but in those imperfections lies the beauty. A perfectly played instrument is played by a human and has this 'humanization' across all areas where human, instrument and the music itself interact, imho.

    If you produce music digitally this will show instantly, because all your instruments will sound flat and boring if you don't humanize.

Tade0 20 hours ago

> It produces sound based on the way the instrument, including its vibrating strings, physically interacts with the surrounding air.

I suppose this is the innovative part. They're not simulating just the string, but also the fluid it's immersed in, which is a computationally hard problem.

I made a vibrating string simulator in college for our Numerical Methods course and for quite a while I couldn't understand why it sounded so bad.

Turns out rounding errors in floating point operations can propagate to a point where they produce this distinct, "metallic" sound.

They're incredibly small, but if your system of differential equations is large enough, they'll become noticeable. Switching to an algorithm with better numerical stability would probably mitigate this issue, but I didn't get that far with my project.
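For the curious, here is a minimal sketch of the kind of explicit finite-difference scheme such a course project typically uses: the leapfrog update for the ideal 1D wave equation with fixed ends. All parameters are illustrative, not taken from the project described above.

```python
import numpy as np

# Explicit finite-difference simulation of an ideal string (1D wave
# equation, fixed ends). With the Courant number lambda = c*dt/dx above 1
# the scheme blows up; below 1 it is stable but dispersive, so high
# partials drift out of tune, which the ear reads as "metallic".

N = 100                 # spatial grid points
courant = 0.9           # c*dt/dx; try values closer to 1.0 to reduce dispersion
steps = 44100           # one "second" at audio rate

# pluck: triangular initial displacement peaking a quarter of the way along
pluck = N // 4
y = np.concatenate([np.linspace(0, 1, pluck), np.linspace(1, 0, N - pluck)])
y_prev = y.copy()       # crude "start at rest" initialization

out = np.empty(steps)
for n in range(steps):
    y_next = np.empty(N)
    # leapfrog update on interior points
    y_next[1:-1] = (2 * y[1:-1] - y_prev[1:-1]
                    + courant**2 * (y[2:] - 2 * y[1:-1] + y[:-2]))
    y_next[0] = y_next[-1] = 0.0   # fixed ends
    y_prev, y = y, y_next
    out[n] = y[N // 3]             # "pickup" position
```

Every sample of `out` is the result of hundreds of floating-point additions compounding on the previous state, which is why tiny rounding errors in a long run can become audible, as described above.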

  • jancsika 20 hours ago

    > Turns out rounding errors in floating point operations can propagate to a point where they produce this distinct, "metallic" sound.

    Reminds me of a Karplus-Strong synthesis implementation that produced a gorgeous guitar/mandolin sound, but only for delay durations that weren't simple ratios with the given sample rate. The simple-ratio durations would end up sounding like crude, attenuated periods of noise-- metallic sounds like you'd expect from a pitch produced in a KSS demo. Everything else had some kind of subtle interpolation error that ended up shaping the noise just enough to make it sound like a million bucks.

    The problem with most KSS is that the filter used will typically saturate the timbre. So rather than hearing a guitar string, you're hearing a guitar-adjacent interpolation scheme whose prominence makes you wonder just how un-guitarlike the original unfiltered sound must have been.
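    For reference, the plain Karplus-Strong loop is only a few lines, and the integer-only delay line below is exactly where the tuning/interpolation issue comes from. This is a rough sketch of the textbook algorithm, not the implementation being described above.

```python
import numpy as np

def karplus_strong(freq, sr=44100, dur=1.0, seed=0):
    """Plain Karplus-Strong: a noise burst circulating through a delay
    line with a two-point averaging lowpass in the feedback loop.
    Only integer delay lengths are used, so the actual pitch is
    sr / round(sr / freq) -- slightly off unless sr/freq is an integer."""
    rng = np.random.default_rng(seed)
    delay = int(round(sr / freq))
    buf = rng.uniform(-1, 1, delay)        # excitation: white noise burst
    n = int(sr * dur)
    out = np.empty(n)
    for i in range(n):
        out[i] = buf[i % delay]
        # averaging filter: damps high partials on each trip round the loop
        buf[i % delay] = 0.5 * (buf[i % delay] + buf[(i + 1) % delay])
    return out

tone = karplus_strong(440.0)
```

    At 44.1 kHz the requested 44100/440 ≈ 100.23-sample period rounds to 100 samples, so this "A440" actually comes out at 441 Hz; a fractional-delay interpolator fixes the tuning, but adds exactly the extra filtering discussed above.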

    • Tade0 12 hours ago

      That's really hellish, considering 44.1k is divisible by numbers 1-10 and more.

      • jancsika 7 hours ago

        Now that I think of it, the problem may have been independent of the sample rate. It may have just been all periods that were integers. So A440 sounded like a guitar string because its period is about 2.27ms. But at 1000Hz you're suddenly transported to a sound-effect from a 1970s episode of Dr. Who.
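        The arithmetic is easy to check. A quick sketch (44.1 kHz assumed, frequencies picked for illustration):

```python
sr = 44100
periods = {freq: sr / freq for freq in (440.0, 441.0, 1000.0, 100.0)}
# 440 Hz  -> 100.227... samples (fractional: needs interpolation)
# 441 Hz  -> 100 samples exactly
# 1000 Hz -> 44.1 samples (a 1 ms period, but not a whole number of samples)
# 100 Hz  -> 441 samples exactly
integer_delay = {f: (sr / f).is_integer() for f in periods}
```

        So at 44.1 kHz a 1000 Hz tone has an integer period in milliseconds but not in samples, while 441 Hz and 100 Hz land exactly on the sample grid; which of these categories triggered the effect would pin down whether the sample rate mattered.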

zebproj 23 hours ago

The article makes it sound like this is a very new idea, but physical models of musical instruments, including the violin, have been around for over 40 years. "Daisy Bell", the first piece of computer music performed with a synthesized singing voice, utilized a physical model based on measurements of the human vocal tract, and that was done in 1962.

Julius Smith wrote a pretty comprehensive textbook on the subject of building physical models of musical instruments, available online. Here, for example, is a chapter on modeling bowed string sounds: https://ccrma.stanford.edu/~jos/pasp/Bowed_Strings.html

  • HelloUsername 23 hours ago

    > Daisy Bell, the first piece of computer music and performed by their model, utilized a physical model of the human singing voice based on measurements of human vocal tract, and that was done in 1962

    From the article:

    > As a demonstration, the researchers applied the computational violin to play two short excerpts: one from “Bach’s Fugue in G Minor,” and another from “Daisy Bell” — a nod to the first song that was ever produced by a computer-synthesized voice.

  • cozzyd 20 hours ago

    I have to say JOS was possibly my favorite instructor in undergrad....

  • BJones12 18 hours ago

    > physical models of music instruments... has been around for over 40 years

    And in consumer products for 20+. Pianoteq [0], which is awesome, was first released in 2006.

    [0] https://en.wikipedia.org/wiki/Pianoteq

    • vunderba 17 hours ago

      Pianoteq has mostly replaced my old Kontakt libraries in my DAW, outside of miking my actual piano, of course.

      Also Audio Modeling has been in the business of creating physically modeled virtual instruments, including the violin (under the SWAM series), for a while now as well. You can do pretty fun things like map a USB breath controller to bow pressure, etc.

      https://audiomodeling.com/products/swam-violin

  • jerf 18 hours ago

    I recall that in the late 1990s physical synthesis was thought to possibly be the next big thing, that it might take over synthesis of musical instruments entirely from the wavetable and FM options of the time. It didn't, but my point is that's where it stood: a prominent alternative that everyone in the relevant fields was aware of and that many people tried to make work, not a recent invention and not just an obscure academic pursuit.

xphos 22 hours ago

As someone who plays the violin very poorly, I don't think this sounds like a violin at all. It's very folksy and synthetic sounding. They are clearly plucking, but it sounds similar to if you were bowing, which is really strange. I definitely could replicate that quality of model, but I think I have heard much better models elsewhere.

  • infinitewars 21 hours ago

    It was a finite element simulation of a CT-scanned violin, but as they note,

    “If there’s anything that’s sounding mechanical to it, it’s because we’re using the exact same time function, or standard way of plucking, for each note,” says Makris, who is himself a lute player. “A musician will adapt the way they’re plucking, to put a little more feeling on certain notes than others. But there could be subtleties which we could incorporate and refine.”

    • xeonmc 21 hours ago

      In addition, the resonant characteristics of the bow also contribute significantly to the sound, as they feed directly back into the stick-slip contact, which is more akin to a mode-locked laser's nonlinear dynamics. The violin body's resonant characteristics, in comparison, are more like a passive filter.
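      A common way that nonlinearity is represented in the bowed-string literature is a falling friction curve: friction is high while the string sticks to the bow hair and drops off as it slips. A rough sketch, where the hyperbolic form and all constants are purely illustrative:

```python
import numpy as np

def bow_friction(dv, mu_s=0.8, mu_d=0.3, v0=0.02):
    """Velocity-dependent friction coefficient for the slipping phase:
    near-static friction mu_s at zero relative velocity, falling off
    hyperbolically toward the dynamic value mu_d as slip speed grows.
    All parameter values here are made up for illustration."""
    return mu_d + (mu_s - mu_d) * v0 / (v0 + np.abs(dv))

dv = np.linspace(0.0, 0.5, 6)   # string/bow relative speed, m/s
mu = bow_friction(dv)
```

      The negative slope of this curve is what makes the contact an oscillator rather than a filter: the operating point is unstable in just the right way to sustain the Helmholtz stick-slip cycle.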

  • esafak 20 hours ago

    It sounds more like a banjo to me. Not at all like a violin.

  • xphos 18 hours ago

    I was rude here; I meant to say I couldn't produce that model. I see the other comments, but just vibes here: it sounds strange. I read some of the comments and the article again, and I just think what makes the violin juicy is the dynamic instability of everything. The best violinist in the world would struggle to play a song the exact same way every single time. Not that they would be making mistakes, but the slightest variation of bow pressure or starting position echoes through a piece. Perhaps the simulation of just one pluck is why it feels so synthetic.

florilegiumson 23 hours ago

“As it is, the new computational model is the first to generate realistic sound based on the laws of physics and acoustics.”

Ouch: this is completely inaccurate. Physical modeling has its roots in the 80s, and Stefan Bilbao has been doing FDM-based methods for over 20 years. I think he discusses FEM in Numerical Sound Synthesis.

  • yummybrainz 23 hours ago

    I'm assuming the intended meaning is that this was the first time the approach led to "realistic" sound?

    • moralestapia 22 hours ago

      That's also not the case. There have been some really accurate physically-modeled instruments for at least 20 years.

      Also, aschkually, a violin is on the "easier" end of making it sound realistic. It's one of the "tutorial" models you go through when you start learning about this (resonators + reverb get you 80% there). Much harder to do any plucking sound (guitar, piano), and much much harder to model percussions accurately (cymbals, drums) and in such a way that the sound doesn't come out dry and very evidently synthetic.

      Source: I was very invested into this in the 2000s, although as a hobby, not professionally.
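      "Resonators + reverb" in practice often means a small bank of two-pole filters tuned to the body's main modes. A minimal sketch of one such resonator; the mode frequencies and bandwidths below are made up for illustration, not measured violin modes:

```python
import numpy as np

def resonator(x, freq, bandwidth, sr=44100):
    """Two-pole resonant filter (the classic 'reson' unit): a damped
    sinusoidal mode at `freq` Hz, with decay rate set by `bandwidth` Hz."""
    r = np.exp(-np.pi * bandwidth / sr)    # pole radius (< 1, so stable)
    theta = 2 * np.pi * freq / sr          # pole angle
    a1, a2 = -2 * r * np.cos(theta), r * r
    y = np.zeros_like(x)
    for n in range(len(x)):
        y[n] = x[n] - a1 * (y[n - 1] if n >= 1 else 0.0) \
                    - a2 * (y[n - 2] if n >= 2 else 0.0)
    return y

# an impulse through a few illustrative "body modes", summed
impulse = np.zeros(4410)
impulse[0] = 1.0
body = sum(resonator(impulse, f, bw)
           for f, bw in [(280, 30), (450, 40), (1000, 80)])
```

      Feeding the excitation signal (a pluck model, or a sample of a plucked string) through a handful of these instead of a single impulse response is roughly the "80% there" approach mentioned above.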

      • InitialLastName 20 hours ago

        Do you know if there has been any progress on conical-bore brass? From what I recall (I did some graduate work in instrument modeling in the late 2000s) reed instruments could be modeled convincingly, but the feedback oscillator with the lip buzzing was very difficult to model.

        • matheist 8 hours ago

          There's e.g. https://summit.sfu.ca/item/11130 from Tamara Smyth and Frederick Scott; Google Scholar shows some citations, but not necessarily conical brass in particular. That link is about trombones, so also not conical. (I read that and tried to implement some stuff from it; see https://nuchi.github.io/trombone/ for a browser-based playable version.)

          Conical and cylindrical bores definitely differ but I don't see why they'd be different specifically with respect to the lip interaction, can you say more about that part?

    • Dropoutjeep 21 hours ago

      If this is their definition of "realistic" sound then I'm horrified

pawelos 16 hours ago

> luthiers, often must wait until the instrument is finished before they can hear how all their hard work will sound.

My father is a luthier, and while he definitely needs to wait until the instrument is finished to hear the full sound, he also uses multiple techniques on parts of an unfinished violin to hear *some* sound. For example, he knocks on the top or back plate and listens to the sound it makes.

I don’t know how much of it is just voodoo, but he’s been doing it for 50 years, so I’m sure he noticed some correlation to the final sound by now. :) I'll have to ask him.

p0w3n3d 7 hours ago

As a person who struggles to learn the violin, I'd say that to me this doesn't sound like a violin. More like a muted harp.

njonsson 7 hours ago

The sound is more like that of a kayagŭm (a Korean traditional instrument) than violin pizzicato.

yboris 17 hours ago

For piano, look at Pianoteq - 50MB of math code that simulates $100k+ pianos, letting you adjust and fine-tune numerous parameters (lid position, microphone placement, etc), no sound-banks used! https://www.modartt.com/

arstep 23 hours ago

It doesn't sound like a real violin at all. A professional violinist would immediately tell that something is wrong.

  • ioseph 23 hours ago

    As an amateur violinist and synth enthusiast, I'd say it sounds tinny and dry.

  • rpozarickij 22 hours ago

    I played other instruments (not very professionally) for 10 years when I was much younger, and was often surrounded by the sound of violins; these don't sound realistic at all.

    My main instrument was the saxophone and whenever I hear AI/artificial saxophone somewhere I can notice it right away, but I'm very curious if I've ever been a victim of the toupee fallacy.

    I wonder whether there's a good test/game where you have to guess whether a given sound of a musical instrument is real or not.

  • nosioptar 20 hours ago

    Cellist with ~6 years experience in a chamber orchestra: gun to my head, I'd have never guessed the sound was supposed to be a violin.

  • codedokode 13 hours ago

    You don't even need to be a violinist to hear that.

codedokode 17 hours ago

I never held a violin, but the demo sounds like some keyboard instrument, as if there were a hammer striking a metal bar or something like this. (I even checked the sound of violin on Youtube to be sure).

  • traderj0e 17 hours ago

    It also sounded like string percussion to me

  • ngokevin 16 hours ago

    Played violin, sounds nothing like a (plucked/pizzicato) violin.

  • sambapa 14 hours ago

    Yeah, not to mention overlapping sounds produced on the same string

shooly 1 day ago

Not sure if that's news, Audio Modeling[1] has been doing that for quite a long time now. The big plus of physical modeling instead of sampling is disk size - instead of tens of GB of samples, you get a 15MB plugin.

It's much more difficult to use, though - you have to control lots of aspects of the simulation (using automation in DAW or MIDI controllers) to make it sound actually realistic.

OK I guess it seems like this is more of a tool for luthiers than for composers or music producers.

[1] https://audiomodeling.com/

  • vintermann 1 day ago

    The first version of Pianoteq came back in 2006. There are apparently some exotic mid-90s synths with claims of being physically modeled too, don't know how accurate that is.

    I currently use a raspberry pi with Pianoteq as sound output for my digital piano. It got a reluctant stamp of approval from my pianist son, although of course he prefers the physical response of even a poor acoustic piano.

    • cwillu 23 hours ago

      Do you have an analog sustain pedal? The fine control with partial pedaling made some difference for me re: pianoteq's feel.

      • vintermann 23 hours ago

        I don't know how many levels it has, but it's definitely more than 2 :) I am a lousy pianist anyway; it's my son who's serious.

    • TheOtherHobbes 23 hours ago

      Pianoteq is more like spectral modelling. The sound lacks some of the movement and bloom of a real piano.

      90s physical modelling was a very simplified modular kind of modelling. Instead of analogue oscillators and filters you had "string" models, "pipe" models, various resonators, and so on.

      The models were interesting, but still quite crude and basic.

      This project is the most physical kind of physical modelling. It's an unsimplified brute-force model of the entire instrument body and string system, in full.

      It doesn't try to "model a resonator", it models blocks of wood with various holes, and calculates how they distort and radiate as sound passes through them.

      It's ridiculously expensive computationally, but it's also the only way to get all of the nuances of the sound.

      I expect they're already working on a stick-slip model for bowing.

      Theoretically you could use the same technique to model a piano or guitar, and you would get something indistinguishable from a real instrument.

      You'd likely need a supercomputer to run the model in anything approaching real time.

      But the advantage is that once you've got it you can do insane things like replace the strings with wood instead of metal, or use different metals, or "build" nonphysical pianos that are fifty feet long and have linear overtones all the way down to the bass.

      • vintermann 23 hours ago

        Pianoteq was quite heavy computationally when it came, it still is, arguably. It was a challenge to get it to run on a raspberry pi 4 in real time.

        I can tell the difference between Pianoteq and a real piano, but I can't in general tell the difference between Pianoteq and a recording of a piano. Maybe there's some insane level of hi-fi gear which would let me, idk? But in general, when it's good enough for Steinway, Petrof and my conservatory student son to give their stamp of approval, I think it's good enough for me as well :) quite a few of those insane things you mention you can already do with pianoteq's physical model (i.e. emulating a 20m grand), and I suspect they keep a few knobs to themselves to sell virtual instruments.

        • iainmerrick 22 hours ago

          > I can tell the difference between Pianoteq and a real piano, but I can't in general tell the difference between Pianoteq and a recording of a piano.

          That's a great way to put it. There's no way to fully reproduce that live sound, but compared to anything played through speakers, Pianoteq is indistinguishable from a real piano.

          Out of the box it sounds a little too perfect, but just setting the Condition to the midway point (1.0) fixes that.

    • seedlessmike 22 hours ago

      Pianoteq is amazing with a good controller like a big Kawai VPC1 or the fanciest Fatar action in the Studiologic "GT" models. It is very responsive. I've been using it for over a decade and the sound keeps improving.

      The combination of pianoteq and a sample based piano is pretty nice too, though tough to do on a Pi.

      Good speakers improve the experience because you get your room resonance etc.

      The coolest thing - you can change temperament. So if you are playing music from before equal temperament, you can hear what different keys used to sound like! Very interesting especially with Bach.

      I agree with your son, there is nothing like a real piano. There are interesting attempts at combining the digital and mechanical with soundboard transducers from Kawai and Yamaha, I haven't used them but I would like to.
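      The temperament difference is easy to quantify. A quick sketch comparing an equal-tempered major triad with just-intonation ratios, with the reference pitch assumed to be A4 = 440 Hz:

```python
import math

A4 = 440.0
C4 = A4 * 2 ** (-9 / 12)                 # C4 is 9 semitones below A4

# equal temperament: every semitone is a 2^(1/12) ratio
equal = {name: C4 * 2 ** (st / 12)
         for name, st in [("C4", 0), ("E4", 4), ("G4", 7)]}
# just intonation: the major triad as simple 4:5:6 ratios
just = {name: C4 * ratio
        for name, ratio in [("C4", 1.0), ("E4", 5 / 4), ("G4", 3 / 2)]}

# deviation of the just pitches from equal temperament, in cents
cents = {n: 1200 * math.log2(just[n] / equal[n]) for n in equal}
```

      The pure 5:4 major third comes out roughly 14 cents flat of the equal-tempered third, which is exactly the kind of difference that jumps out when you switch temperaments in a physical model.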

mchinen 1 day ago

Bowed instruments are very cool to model because of the nonlinear slip of the bow against the string. A bit curious why bowing was not discussed or used in the example of a violin, just plucking. Do luthiers test violins more by plucking than bowing?

  • nwatson 1 day ago

    It's probably harder to model and the results "aren't quite there yet".

  • tkocmathla 23 hours ago

    They briefly address this in the article:

    > Violin bowing, the researchers say, is a much more complicated interaction to model.

    • mchinen 22 hours ago

      Thanks, I missed that.

odyssey7 11 hours ago

This is getting a lot of criticism but as a demo I think it sounds nice.

russellbeattie 12 hours ago

Apparently, the shape of a peeled orange (a flattened sphere) is unrelated to a violin's f-holes. I had it in my mind that the S-shapes were similar for acoustic reasons.

Looking it up just now, it turns out that, "Modern physics research shows that the f-shape allows the instrument to push much more air than a traditional round hole, resulting in greater acoustic power and projection."

Just wanted to share in case someone else had that same bit of false knowledge in their head.

RickJWagner 13 hours ago

“Violin makers, aka luthiers”…. Lest anyone be confused, luthiers work on any stringed instrument with a neck. Guitars, banjos, etc.

MITSardine 23 hours ago

As a disclaimer I haven't read the article, nor do I know much about simulating instruments in particular, but I just wanted to point out that accurately simulating the physics of a musical instrument is most likely still a very difficult problem.

I have no doubt there's been analytical/semi-analytical models around for decades. I mean a program that can take an arbitrary geometry or class thereof with specific materials and simulate the high frequency vibrations and model interactions with the body with high fidelity (not through ad-hoc models) is probably still out of scope of real time simulation.

My point is really that there's often families of models that deal with one thing, from semi-analytical first coded in Fortran in the 80s that can run in milliseconds but is only valid in certain configurations with a low degree of accuracy, to "first principles" simulations that may well require a supercomputer to produce results to a useful degree of accuracy (and not in real time). So, just because you see someone claim they can "simulate X", and then another makes the same claim 40 years later, that doesn't mean they're doing the same thing.

For instance, aeronautics has XFOIL. It's a semi-analytical model first devised in the 80s that computes aerodynamic coefficients for a certain class of airfoils (NACA). My understanding is it's a very clever, and industrially significant, piece of code, but ultimately it works in a narrow regime with some heavy simplifications. You can now get results from this in real time on a webpage. A proper CFD calculation for a NACA wing will take on the order of minutes to hours on a workstation (depending on requested precision and settings, e.g. speed of air), and while closer to first principles, it's still using physical simplifications (RANS). So yeah, although nominally people have been "simulating airfoils" for 40 years, the techniques have refined considerably, and will continue to do so (practical LES and, someday, DNS). People might well still be "simulating airfoils" a century from now, in ever more accurate (nailing things down within the constraints), high-fidelity (lifting the constraints) and generic ways.

Back to instruments, this is a difficult coupled problem, at fairly high frequencies (high frequencies = more expensive), with possible fluid-structure interactions, not to mention the geometries are fairly complex (to even get a workable mesh to begin with). My uneducated guess is we're still at either the semi-analytical or the "considerably simplified first principles" stage for this type of problem. Just like DNS, I'm sure you could "just resolve the scales and run it through a simulation with a really tiny time step", and this is liable to be similarly expensive to DNS (million-dollar single simulation). Additionally, they have to deal with the human ear, which is perhaps more unforgiving than an error plot on drag or lift. So I wouldn't dismiss news of instrument simulation as stale just because someone made something that produced similar artifacts in the past, as the methods will continue to evolve considerably.

jackmalpo 17 hours ago

are the realistic sounds in the room with us?