BT Phobos, a review

[NOTE: This article discusses commercial products and contains links to them. I do not receive any money if you buy those tools, nor do I work for or am affiliated with any of those companies. The opinions expressed here are mine and the review is subjective]

This is my attempt at a review of Spitfire Audio BT Phobos. Before diving into the review, and since I know I will be particularly critical of some aspects, I think it’s fair to assess the plugin right away: BT Phobos is an awesome tool, make no mistake.

BT Phobos is a “polyconvolution” synthesiser. It is, in fact, the first “standalone” plugin produced by Spitfire Audio, one of the companies I respect the most when it comes to music production and sample-based instruments.

The term polyconvolution is used by the Spitfire Audio team to indicate the simultaneous use of three convolvers fed by four primary audio paths: you can send any amount of the output of each of the four primary sources (numbered 1 to 4) to each of the three convolution engines (named W, X and Y).


Source material controls

There is a lot of flexibility in the mixing capabilities: separate dry/wet knobs send a specific portion of the unprocessed source material to the “amplifier” module, control how much of the signal goes to the convolution circuits, and finally determine how much of each convolution engine applies to each source sound.

This last bit is achieved by means of an interesting nabla-shaped X/Y pad: by positioning the icon that represents a source module close to a corner, it’s possible to activate just the convolution engine that corner represents; for example, top left is the W engine, top right the X and bottom the Y. Moving the icon gradually introduces contributions from the other engines, and double clicking the icon positions it at the centre of the nabla, making all convolvers contribute equally to the wet sound.
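To make the routing concrete, here is a rough sketch of how such a mix could be computed. This is purely my own illustration of the signal flow described above, not Spitfire’s actual implementation: three convolvers weighted by the pad position, blended with the dry signal.

```python
# Illustrative sketch of the "polyconvolution" mixing idea -- NOT
# Spitfire's implementation, just my reading of the signal flow:
# a source feeds three convolution engines (W, X, Y); the triangular
# pad assigns a weight to each engine, and a dry/wet knob blends the
# result with the unprocessed source.

def convolve(x, h):
    """Direct-form convolution of signal x with impulse response h,
    truncated to len(x) output samples."""
    return [sum(h[k] * x[n - k] for k in range(len(h)) if 0 <= n - k < len(x))
            for n in range(len(x))]

def polyconvolve(source, irs, weights, dry_wet):
    """Blend the dry source with a weighted mix of three convolvers.

    irs     -- impulse responses for engines (W, X, Y)
    weights -- pad weights for (W, X, Y); at the centre, all equal 1/3
    dry_wet -- 0.0 = fully dry, 1.0 = fully wet
    """
    wet = [0.0] * len(source)
    for w, ir in zip(weights, irs):
        for n, v in enumerate(convolve(source, ir)):
            wet[n] += w * v
    return [(1.0 - dry_wet) * d + dry_wet * v
            for d, v in zip(source, wet)]
```

In this sketch, double clicking the pad icon would correspond to weights of (1/3, 1/3, 1/3), while dragging the icon fully into the W corner would correspond to (1, 0, 0).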


The convolution mixer

Finally, each convolver has a control to change the output level of the convolution engine before it reaches its envelope shaper. Spitfire Audio has released a very interesting flow diagram that shows the signal path in detail, linked below for reference.

BT Phobos signal path

In addition to the controls just described, the main GUI has basic controls to tweak the source material with an ADSR envelope, directly accessible below each of the main sound sources as well as the convolution modules; more advanced settings are available by clicking on the number or letter that identifies each module.


The advanced controls interface

An example of such controls is the Hold parameter, which lets the user adjust how long the sound is held at full level before entering the Decay phase of its envelope. Another useful tool is the set of sampling and IR offset controls, which allow you to tweak parameters like the starting point of the material, its quantisation, and its Speed (the playback speed of the samples, a function of the host tempo). There is also a control for the general pitch of the sound; finally, a simple but effective section is dedicated to filtering – although a proper EQ is missing – as well as panning and level adjustments.

All those parameters are particularly important when using loops, but they also contribute to shaping the pitched material, and they can be randomised for interesting effects and artefacts generated from the entropy (you can randomise just the material selection rather than all the parameters).

Modulation is also present, of course, with LFOs of various kinds that can be used to modulate basically everything. You can access them either by clicking on the mappings toggle below the ADSR envelope of each section, or by using the advanced settings pages.

The amount of tweaking that can be done to the material, in both the sources and the convolution engines, is probably the most important aspect of BT Phobos: it gives an excellent amount of freedom to create new sounds from what’s available – already a massive amount of content – and allows you to build wildly different patches with a bit of work. It’s definitely not straightforward, though, and it takes time to understand the combined effect that each setting has on the whole.

Since the material is polyphonic, the Impulse Responses for the convolution are created on the fly. In fact, one interesting characteristic of BT Phobos is that there is no difference between material for the convolution engines and material for the source modules: both draw from the same pool of sounds.


BT Phobos’ beautiful GUI

There is a difference in the type of material, though: loop-based samples are, well, looped (and tempo-synced), and their pitch does not change with the key that triggers them (although you can still affect the general pitch of the sound with the advanced controls), while “tonal” material is pitched and follows the MIDI notes.

One note about the LFOs: the mappings are “per module”. In other words, it is possible to modulate almost every parameter inside a single module, be it one of the four input sources or one of the three convolution engines, but there seems to be no way to define a global mapping of some kind. For example, I found a very nice patch from Mr. Christian Henson (who, incidentally, made what are in my opinion the best and most balanced presets overall), and I noticed I could make it even more interesting by using the modulation wheel. I wanted to modulate the CC1 message with an LFO (in fact, ideally it would be even better to have access to a custom envelope, but BT Phobos doesn’t have any for modulation use), but I could not find a way to do that other than using Logic’s own MIDI FX. I understand that MIDI signals are generated outside the scope of the plugin, but it would be fantastic to have the option of tweaking and modulating everything from within the synth itself.

All the sources and convolvers can be assigned to separate parts of the keyboard by tweaking the mapper at the bottom of the GUI. It is not possible to map a sound with a keyboard offset – for example, to play C1 on the keyboard but trigger C2, or any other note – but of course you can change the global pitch, which has effectively the same result, and, as said before, it can also be modulated with an LFO or via DAW automation for more interesting effects.


Keyboard mapping tool

Indeed, the flexibility of the tool and the number of options at your disposal for tweaking the sounds are very impressive. Most patches are very nice, ready to be used as they are, and blend nicely with lots of disparate styles. Some patches are very specific, though, and challenging to use. Generally, I would consider these as starting points for exploration rather than “final”.

When reading about BT Phobos in the weeks before its release, many people asked whether you can add your own sounds to it. Unfortunately, it’s not possible.

At first, I didn’t think this was a limitation or a deal breaker. I still think it’s not a deal breaker: BT Phobos adds value even just as a standalone synth, compared with recreating the same kind of signal path manually with external tools to give your own content the “Phobos treatment”. That is entirely possible, of course – for example with Alchemy and Space Designer (both included in MainStage, which Mac users can get for a staggering 30 euros, even without Logic Pro X!) – but we would be trading away the immediacy that BT Phobos delivers.

That, maybe, is my main criticism of this synth, and I hope Spitfire Audio turns BT Phobos into a fully fledged tool for sound design over time, maybe enabling access to spectral shaping in some form or another, so we can literally paint over (or paint away!) portions of the sound – something you can do with iZotope Iris or Alchemy, and a very powerful way to shape a sound and do sound design in general.

Another thing that is missing is an effects module, although I don’t know how important that is, given that there are thousands of outstanding plugins that do all sorts of effects, from delay to chorus and so on. In fact, many patches benefit from added reverb (I use Eventide Blackhole and found it works extremely well with BT Phobos, since it’s also prominently used for weird sound effects). But it could be interesting to put some effects (including a more proper EQ section) in various places in the signal path, although it’s all too easy to generate total chaos from such experimentation, so it’s possible Spitfire Audio simply decided to leave this option for another time and focus instead on a better overall experience.

And there’s no arpeggiator! Really!

The number of polyphonic voices can be altered. Spitfire Audio states that the synth tweaks the number of voices at startup to match the characteristics of your computer, but I can’t confirm that, since every change I make seems to persist, even if I occasionally hear some pops and crackles at higher settings. Nevertheless, the CPU usage is pretty decent unless you go absolutely crazy with the polyphony count. I also noted that the number affects the clarity of the sound. This is understandable, since a higher count means more notes can be generated at the same time, which means more things competing for the same spectrum, and things can become very confusing very quickly. On the other hand, a lower polyphony count has a bad impact on how the notes are generated: sometimes I feel that things just stop generating sound, which is counterintuitive and very disturbing, especially since it’s very easy to reach a high polyphony count with all those sources and convolvers.

Also worth noting is that, by nature, some patches have wildly different envelope and level settings, which means it’s all too easy to move from a quiet to a very loud patch just by clicking “next” (which is possible in Logic, at least, with the next/prev patch buttons at the top of the plugin frame). The synth does not stop the sound, nor does it make any attempt to fade from one sound to the next; instead, the convolutions simply keep working on the next sample in the queue with the new settings! I still have to decide whether this is cool or not – perhaps it’s not intentional – but I can see how it could be used to automate patch changes in some clever way during playback. And indeed, I was able to create a couple of interesting side effects just by changing between patches at the right time.

More on the sounds. The amount of content is really staggering, and simply cycling through the patches does not do justice to this synth, at all!

What BT Phobos wants is a user who spends time tweaking the patches and playing with the source material to get the most out of it. However, it’s easy to see how limiting this may feel at the same time, particularly with the more esoteric and atonal sounds, and there’s certainly a limit on how good a wooden stick convolved with a thin aluminium can may sound, so some patches do feel repetitive at times, as does the source material. There are quite a few very similar drum loops, for example, or variously pitched “wind blowing into a pipe” kinds of things.

This is a problem common to other synths based on the idea of tweaking sounds from the environment, though. For example, I have the amazing Geosonics from Soniccouture, an almost unusable library that, once tweaked, is capable of amazing awesomeness. Clearly, the authors of both synths – and this is especially true for BT Phobos, I think – are looking at an audience capable of listening through the detuned and dissonant sound waves and shaping a new form of music.

This is probably the reason why so many of the pre-assembled patches dive the user, at full speed, into total sound design territory. However – and this is another important point of criticism – this is sound design that has already been done for you… A lot of the BT patches, in particular, are clearly BT patches; using them as they are means you are simply redoing something that has been done before, and, despite the strong experimental feeling still present, it’s not totally unheard of or new.

For example, I also happen to have BreakTweaker and Stutter Edit (tools that also originally come from BT), and I could not resist the temptation to play something that resembles BT’s work on “This Binary Universe” or “_” (fantastic albums)! While this seems exciting – BT in a box! And you can also see the democratising aspect of BT Phobos: I can do that in half an hour instead of six months of manual CSound programming! – it’s an unfortunate and artificial limitation on a tool that is otherwise a very powerful enabler, capable of bringing complex sound design one step closer to the general public. Having the ability to process your own sounds would mitigate this aspect, I think.

I do see how this is useful for a composer in need of a quick solution for an approaching deadline, even with the most experimental tones, though: those patches can resolve a deadlock or take you out of an impasse in a second.

The potential for BT Phobos to become a must-have tool for sound design is all there, especially if Spitfire Audio keeps adding content, perhaps more varied (or, even better, allows loading your own content). The ability to shape the existing sounds already makes it very usable. I don’t think it’s a general tool at this stage, though, and it definitely should not be the first synth or sound-shaping processor in your arsenal, especially if you are starting out now.

But it’s not just a one-trick pony either: it offers quite a lot of possibilities, and the more you work with it, the more addictive it becomes. I can see Spitfire Audio soon offering this synth within a collection comprising some of their more experimental material, like LCO and Enigma, which would be very nice indeed.

It’s unfortunate that Spitfire Audio does not offer an evaluation period: contrary to most of their offerings, BT Phobos needs time to be fully grasped and is all but immediate (well, unless you are happy with the default patches or you really just need to “get out of trouble” quickly – but be careful with that, because the tax is on originality). It can, and does, evolve over time, as its convolutions do, and it can absolutely deliver total awesomeness if used correctly.

Most patches are also usable out of the box, and by adding some reverb or doing some post-processing with other tools, it’s possible to squeeze even more life out of them.

Overall, I do recommend BT Phobos: it is a wonderful, very addictive synthesiser.

Luca ❤️

I don’t usually post photos of my family, especially my kids, but this is a very special occasion that needs celebrating.

On the 29th, at 7:31 in the morning (and what a long night!), my second child, Luca, was born in Hamburg. I guess this makes him an official “hamburger” now 🤣

Luca was named after his uncle, one of the most eclectic and interesting people I have ever met, and it was a great honour for us.

I don’t have many words, really: being a father is amazing, and I’m very proud of, and very in love with, my kids. Very, very in love.

Welcome Luca, son of Hamburg, and citizen of the World!


Luca and Fiorenza 🙂

P.S. I just realised that it’s been well over a year since I last posted anything. I will try to change that – I already have a few things to share that will probably be very interesting!

Java Magazine Interview

There is a nice profile of me in this bimonthly issue of Java Magazine, and I am very flattered by it, so let me share it right away with you.

There is one question I was expecting, though, that didn’t come: “When did you start working on Java?”.

So, to give some more context, let me play with it and answer my own question here (and without space limits!). I think this is important, because it is about how I started to contribute to OpenJDK, and it shows that you can do the same… if you are patient.

JM: When did you start working on Java?

Torre: I started to work in Java around its 1.3 release, and I have used it ever since. I started working on Java itself quite a bit later, though, probably around the Java 1.5/1.6 era. I was working on an MSN Messenger clone in Java on my Linux box, since all my friends were using it (MSN, I mean; not Linux, unfortunately), including the dreaded emoticons, which no Linux client supported at the time.

I had all the protocol stuff working – I could handshake and exchange messages (although I still had to figure out the emoticons part!) – but I had a terrible problem: I needed to save user credentials. Well, Java has a fantastic Preferences API, easy enough, right? Except that what I was using wasn’t the proprietary JDK; it was the Free Software version of it: GNU Classpath.

Classpath at the time didn’t have Preferences support, so I was stuck. I think somebody was writing a filesystem-based Preferences backend, or perhaps it was in Classpath but not in GCJ, which is what everybody was using as a VM with the Classpath library. Anyway, when I started to look at the problem, I realised it would be nicer to offer a GConf-based Preferences store and integrate the whole thing into the Gnome desktop (at the time, Gnome was a great desktop, nothing like today’s awfulness).

I was hooked. In fact, I never even finished my MSN Messenger! After GConf, all sorts of stuff came in: the Decimal Formatter, the GStreamer sound backend, various fixes here and there; this is when I learned a lot about how Swing works internally, by following the work of Sven de Marothy, Roman Kennke and David Gilbert.

When Sun was about to release OpenJDK, I was in that very first group and witnessed the whole thing, including a lot of the behind-the-scenes work on this extremely important code contribution. The OpenJDK license is “GPL + Classpath exception” for a reason. I remember all the heroes that made Java Free Software.

I guess I was lucky, and the timing was perfect.

However, right at the beginning, contributing actual code to OpenJDK wasn’t nearly as easy as in Classpath. There was (and is!) a lot of process, and things took a lot of time for anything but the most trivial changes.

But eventually I insisted, and Roman and I were the first external people to have code land in the JDK. Roman was, I believe, the first independent person to have commit rights (I think the people who are still today in my team at Red Hat, and later also SAP, already had some changes in, but at the time the two of us were the only completely external contributors).

It wasn’t easy: I had to challenge myself and push a lot, and not give up. I had to challenge Sun, and challenge Oracle even more when it took the lead. But I did it. This is what I mean when I say that everybody can do it: you can develop the skills, and then you need to build the trust and not let it go. I’m not sure which is more complex, but if you persist it eventually comes. And then, all of a sudden, billions of people use your code and you are a Java Champion.

So this is how it started.

O.R.k. Remix Contest

Just a couple of days ago I found out that some of my favourite musicians decided to join together to release an album, and made it available for preorder on a crowdfunding website, Music Raiser.

The name of the band is “O.R.k.” and the founders are none other than Lef, Colin Edwin, Pat Mastelotto and Carmelo Pipitone.

You have probably heard their names. If not: Colin Edwin is the bassist of Porcupine Tree, while Carmelo Pipitone is the gifted guitarist of Marta Sui Tubi, an extremely original Italian band – they have probably done the most interesting things in Italian music in the last 15 years or so. Lef, aka Lorenzo Esposito Fornasari, has done so many things that it is quite hard to pick just one, but in the Metal community he is probably best known for Obake. Finally, Pat Mastelotto is the drummer of King Crimson, and this alone made me jump out of my seat!

One of the pre-order bonuses was the ability to participate in a Remix Contest, and although I only got the stems yesterday in the late morning, I could not resist at least giving it a try, and it’s a great honour for me that they have put my attempt on their YouTube channel:

It’s a weird feeling editing this music. After all, who am I to cut and remix and change the drum part (King Crimson, please forgive me!), how do I dare touch the guitars and voice, or rearrange the bass!? 🙂

But it was indeed a really fun experience, and I hope to be able to do this again in the future.

And who knows, maybe they will even like how I messed up their art and decide to put me on their album! Nevertheless, being able to see this material in semi-raw form (and a very interesting one!) has already been a great honour, and my first prize.

I’m looking forward now to listening to the rest of the album!

Debugging the JDK with GDB

At FOSDEM, Volker presented a great session on how to debug OpenJDK (and HotSpot) with gdb. Roman and Andrew (Dinn) did something similar while speaking about Shenandoah. In the next few days I’ll try to upload their slides to the FOSDEM website so that anyone can access them (and hopefully we will have the recordings this time as well).

There are a few things, though, that I keep forgetting, so I thought it would be useful to sum them up in a blog post – hopefully general enough for most people, as well as a future reference for myself!

Suppose you are trying to track down some funny behaviour in your code, and the crash is in a native library (or perhaps in some native OpenJDK code outside HotSpot).

What you would usually do with Java code is to start your debugger in Eclipse or IntelliJ or whatever, and go step by step until you figure out what’s wrong.

But when dealing with native code, things get more complex: Eclipse and NetBeans can’t follow native code by default, and IntelliJ doesn’t support native code at all (at least on Linux). There are options, though. First, you can still use those tools in process-attach mode – they have very good debugging interfaces that make it easy to analyse almost anything quickly – but you can also use gdb directly, likewise in process-attach mode.

Let’s see a couple of common cases here:

1. The application crashes and you want gdb launched automagically:

$ java -XX:OnError="gdb - %p" MyApplication

Roman (thanks!) showed me this trick back in 2008! Honestly, I haven’t tested it recently, but I suppose it still works 😉

2. You want to start a debugging session yourself rather than automatically on crash.

The trick here is to either start the application in debug mode via Eclipse/whatever, or attach the Java debugger (including jdb, if you enjoy suffering!) remotely:

$ java -Dsun.awt.disablegrab=true \
       -Xdebug \
       -Xrunjdwp:transport=dt_socket,server=y,address=1080 \
       MyApplication

This will produce an output like the following:

Listening for transport dt_socket at address: 1080

The application blocks until the debugger is attached.

At this point, you can set the breakpoints in your IDE and attach to the Java process remotely. The idea is to set the breakpoint right before the native call (tip: If you follow from there stepping with the java debugger, you’ll also see how native libraries are loaded).

Now, to connect gdb, all you need to do is get the pid of the Java process, with jps for example:

$ jps
30481 Jps
27162 MyApplication <------

And then:

$ gdb -p 27162

Set your breakpoint in the native function of choice. Remember the name mangling: you need to look up what the methods are actually called in native code. The naming convention is:

Java_{package_and_classname}_{function_name}(JNI arguments)

But you should double check everything, since method overloads dictate a slightly different convention (the mangled argument signature is appended to the name).
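As an illustration, here is a small helper (my own, purely for demonstration, not part of any JDK tool) that applies the basic JNI mangling rules: ‘.’ and ‘/’ become ‘_’, while ‘_’, ‘;’ and ‘[’ are escaped as _1, _2 and _3; for overloaded methods, “__” plus the mangled argument signature is appended (Unicode escapes are omitted here for brevity).

```python
# Hypothetical helper: computes the native symbol name for a Java
# native method, following the JNI specification's naming rules.

def jni_symbol(fqcn, method, overload_sig=None):
    """fqcn         -- fully qualified class name, e.g. 'java.awt.Component'
    method       -- the native method's name
    overload_sig -- only needed when the native method is overloaded: the
                    argument descriptors, e.g. 'ILjava/lang/String;'"""
    def mangle(s):
        table = {'_': '_1', ';': '_2', '[': '_3', '.': '_', '/': '_'}
        return ''.join(table.get(c, c) for c in s)
    name = 'Java_' + mangle(fqcn) + '_' + mangle(method)
    if overload_sig is not None:
        name += '__' + mangle(overload_sig)
    return name

print(jni_symbol('java.awt.Component', 'initIDs'))
# Java_java_awt_Component_initIDs
```

You can then set the breakpoint directly on the resulting symbol, e.g. break Java_java_awt_Component_initIDs in the gdb console.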

If, instead of using gdb from the command line, you want to use your IDE, the rule to follow is the same. As far as I know, both Eclipse and NetBeans allow their native debugger plugins to attach to a process.

All that is needed now is to set your gdb breakpoints and issue a continue in the gdb shell, in order to resume the Java process so that it can hit the breakpoint you just set. From there, stepping in Java code until you enter the native function will magically continue the stepping inside the native function! If you use Eclipse for both debuggers, this is extremely cool, since it’s just like following the program inside the same editor!

There’s one last thing to remember (other than possibly the need to set the source location in gdb or installing the OpenJDK debuginfo package for your distribution).

HotSpot uses segfaults for a number of interesting things, like deoptimisation, NullPointerException, etc. Apparently, this is faster than doing specific checks and jumping around the code. It is a problem for gdb, though, since it will stop every now and then at some random routine you don’t really (usually!) care about:

(gdb) cont
Program received signal SIGSEGV, Segmentation fault.

Irritating, since those are all legitimate segfaults.

To avoid that just do the following in the gdb console (or from the IDE in whatever way this is handled there):

(gdb) handle SIGSEGV nostop noprint pass

Now all the interesting work can be done without interruptions 😉

Another Schedule Change


Hi all,

I’ve made another small change to the schedule: the “Java 9: Make Way for Modules” and “Beyond Java 9” presentations have been swapped. This is the new schedule:

10:30 The State of OpenJDK
11:00 Java 9: Make Way for Modules!

14:00 Beyond Java 9

The FOSDEM booklet has already been printed, so these changes will not be visible there. They have been picked up by the online tool, though:

Java DevRoom Schedule changes

For technical reasons, and in accordance with the speakers, I have slightly changed the schedule.

The “Fortress” talk has been cancelled, and I have swapped “Building an open Internet of Things with Java and Eclipse IoT” and “Java restart with WebFX“; the latter will get a slightly longer slot:

12:30 – 12:55  Building an open Internet of Things with Java and Eclipse IoT
17:00 – 17:55  Java restart with WebFX

The full schedule is already online: