Tuesday 23 June 2015

Dissipative self-assembly

I'm currently at a conference on Engineering of Chemical Complexity hosted by the TUM in Munich. Today we had an interesting talk from Thomas Hermans on "dissipative self-assembly". To understand this concept, we need to think about steady states.

At first glance, many of the things around us don't change over time -- they are in steady states. For example, the hotel I'm currently sitting in looks pretty much the same as it did five minutes ago. A fundamental principle of physics is that isolated systems tend to relax towards an "equilibrium state" (this is essentially the famous second law of thermodynamics). Equilibrium is an example of a steady state, because when a system reaches equilibrium, it has nowhere else to go. It isn't, however, the only example. In fact, most objects that we see in steady states are actually stuck in "kinetic traps", including the hotel. Really, the equilibrium state of the materials that make up this hotel wouldn't look very welcoming! For a start, everything would be much closer to the ground. If we wait long enough, of course, the hotel will fall down, but this happens so slowly that, to the casual observer, it appears "trapped" in a hotel-like steady state.

Dr. Hermans wants us to consider a third, fundamentally distinct type of "dissipative" steady state. In kinetically trapped or equilibrium steady states, the system maintains itself - you don't need to supply anything to keep it in the steady state. However, think about a human body, which is, roughly speaking, in a steady state; it needs to be constantly supplied with food to stay this way. This situation is typical of many biological systems - they are out of equilibrium, but not kinetically trapped, and would rapidly relax towards equilibrium unless fed with fuel in some form. Feeding them with fuel keeps them in a "dissipative" steady state, so called because fuel gets used up in the process.
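To make the distinction concrete, here is a minimal rate-equation sketch in Python. The scheme, species and rate constants are my own invented toy model, not anything from the talk: an "assembled" state B can only be produced from A by consuming fuel F, so B persists at a finite level only while fuel is supplied; cut off the supply and the system relaxes back to its all-A equilibrium.

```python
from scipy.integrate import solve_ivp

# Toy scheme:  A + F -> B + waste  (rate k_act), and  B -> A  (rate k_dec).
# Fuel F is replenished at a constant rate `supply`; set supply = 0 to
# starve the system.
k_act, k_dec = 1.0, 0.5

def rhs(t, y, supply):
    a, b, f = y
    activation = k_act * a * f      # fuel-consuming step that builds B
    decay = k_dec * b               # spontaneous relaxation back to A
    return [decay - activation, activation - decay, supply - activation]

y0 = [1.0, 0.0, 1.0]                # start with A and some fuel, no B

fed = solve_ivp(rhs, (0, 100), y0, args=(0.3,))       # constant fuel supply
starved = solve_ivp(rhs, (0, 100), y0, args=(0.0,))   # no fuel supply

print("fed (a, b, f):    ", fed.y[:, -1].round(3))    # B held at a finite level
print("starved (a, b, f):", starved.y[:, -1].round(3))# B relaxes back to zero
```

The fed system sits indefinitely at a finite concentration of B, but only by burning fuel; the starved system drains its fuel and falls back to equilibrium.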

Dr. Hermans is looking to design artificial biochemical assemblies that exist in such dissipative steady states. Why might this be worthwhile? Hallmarks of biological systems include their flexibility, repairability and adaptability, features that are probably much more natural in dissipative assemblies, in which there is a constant turnover of material. As yet, the results seem preliminary and I can't find any publications - although a discussion of the principle of dissipative self-assembly can be found in this article, "Droplets out of equilibrium".

Tuesday 16 June 2015

Using a single molecular reaction to perform a complex calculation

Much of the research in molecular computing is based on making molecules imitate the simplest logical operations that underlie conventional digital electronics. For example, we can design molecular systems that mimic "AND", "OR" and "NOT" gates. These gates can then be joined together to perform more complex tasks (e.g. here). But at some level, physical systems naturally perform complex calculations. For example, the energy levels of a hydrogen atom are predicted theoretically through a number of complex equations, and in principle you could infer the answers to these equations by measuring the energies.
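As a toy illustration of that kind of gate composition (in ordinary software, not molecules; the half adder is a standard construction, not anything specific to the linked work):

```python
# A half adder built from AND/OR/NOT, the same three primitives the
# molecular systems mimic. The sum bit is XOR = (a OR b) AND NOT(a AND b).
def AND(a, b): return a and b
def OR(a, b):  return a or b
def NOT(a):    return not a

def half_adder(a, b):
    sum_bit = AND(OR(a, b), NOT(AND(a, b)))   # XOR of the inputs
    carry = AND(a, b)
    return sum_bit, carry

for a in (False, True):
    for b in (False, True):
        print(int(a), int(b), "->", half_adder(a, b))
```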

Of course, there are a number of issues with this measurement-as-computation idea. You need to be confident about the theory that relates measurements to equations, and you also need to be confident about the measurements themselves. But perhaps the most important caveat is that the equations you can solve by performing measurements are often only interesting for predicting the outcome of the measurements themselves; the whole thing becomes rather incestuous and not obviously useful.

In a recent paper (here), Manoj Gopalkrishnan shows how the complexity of a single chemical reaction can be harnessed to perform a useful computation. The paper is quite involved, but the essence is the following. Let's say I have chemical species X1, X2 and X3 that can interconvert via the reaction X1 + X3 ⇌ 2 X2, with the left and right sides equally stable (such a system could, in principle, be built from DNA). If I start with certain initial concentrations x1, x2 and x3, the system will evolve to reach some equilibrium in the long time limit, x'1, x'2 and x'3. This equilibrium represents a maximization of the system's entropy, subject to the constraint that (x1,x2,x3) can only be changed via the X1 + X3 ⇌ 2 X2 reaction.
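To see what this relaxation looks like, here is a quick numerical sketch (my own illustration: I've set both rate constants to 1, and the initial concentrations are arbitrary):

```python
from scipy.integrate import solve_ivp

# Mass-action kinetics for X1 + X3 <=> 2 X2 with equal forward and reverse
# rate constants, so at equilibrium detailed balance gives x2**2 = x1*x3.
def rhs(t, x):
    x1, x2, x3 = x
    net = x1 * x3 - x2**2            # net forward flux
    return [-net, 2 * net, -net]

x0 = [0.7, 0.1, 0.2]                 # arbitrary initial concentrations
sol = solve_ivp(rhs, (0, 200), x0, rtol=1e-10, atol=1e-12)
x1, x2, x3 = sol.y[:, -1]

print("equilibrium:", round(x1, 4), round(x2, 4), round(x3, 4))
print("x2^2 / (x1*x3):", round(x2**2 / (x1 * x3), 4))        # -> 1
print(f"conserved: total = {x1 + x2 + x3:.4f}, x2 + 2*x3 = {x2 + 2*x3:.4f}")
```

The quantities conserved by the reaction (the total concentration, and x2 + 2 x3) are the constraint; within the surface they define, the dynamics drive the system onto x2² = x1 x3.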

Dr. Gopalkrishnan shows that the constrained optimization performed by the chemical reaction solves a problem of wider interest. Let's say we have a random variable that can take three distinct values y1, y2 and y3 (in essence, a three-sided die). This die might be biased, with the probabilities of seeing each side not equal to 1/3. Further, we might have reason to think that the probabilities are constrained by underlying physics, so that only certain combinations of the probabilities p(y1), p(y2) and p(y3) are possible. So we know something about our die, but not the specifics. If someone rolls the die several times and presents us with the results, can we estimate the most likely values of p(y1), p(y2) and p(y3)?

Dr. Gopalkrishnan shows that, for a certain very general type of constraint on probabilities (a "log-linear model"), the procedure for finding this "maximum likelihood estimate" of p(y1), p(y2) and p(y3) is identical to that performed by an appropriate chemical system in maximizing its entropy under constrained particle numbers. Different log-linear constraints on p(y1), p(y2) and p(y3) translate into different choices of the reaction, which doesn't have to be X1 + X3 ⇌ 2 X2. Given the appropriate reaction, all we have to do is feed in the data (the results of the die rolls) as the initial conditions (x1,x2,x3), and the eventual steady state (x'1,x'2,x'3) gives us our estimate of (p(y1),p(y2),p(y3)).
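Here's that recipe end-to-end for the reaction above, as a sketch under assumptions: the die counts are made up, and I'm taking the matching log-linear family to be p(y_i) ∝ q^(i-1), the one-parameter family that satisfies this reaction's equilibrium constraint p(y2)² = p(y1) p(y3):

```python
import numpy as np
from scipy.integrate import solve_ivp

counts = np.array([40.0, 35.0, 25.0])   # made-up die-roll data
x0 = counts / counts.sum()              # empirical frequencies as initial concentrations

def rhs(t, x):                          # same X1 + X3 <=> 2 X2 kinetics as before
    x1, x2, x3 = x
    net = x1 * x3 - x2**2
    return [-net, 2 * net, -net]

sol = solve_ivp(rhs, (0, 500), x0, rtol=1e-10, atol=1e-12)
p_chem = sol.y[:, -1]

# Direct maximum-likelihood fit over the log-linear family p_i ∝ q**(i-1),
# scanned on a grid of q values.
q = np.linspace(1e-3, 10.0, 200001)
p = np.array([np.ones_like(q), q, q**2])
p /= p.sum(axis=0)
p_mle = p[:, np.argmax(counts @ np.log(p))]

print("chemical steady state:", p_chem.round(4))   # both ~ (0.411, 0.328, 0.261)
print("direct MLE:           ", p_mle.round(4))
```

The two routes agree because the chemistry conserves exactly the sufficient statistic of the family (here x2 + 2 x3, the empirical mean) while relaxing onto the constraint surface x2² = x1 x3, which is precisely the stationarity condition for the constrained likelihood.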

In principle, this argument generalizes to much more complex systems than the illustration provided here, and the principle of maximum likelihood estimation isn't only applicable to biased dice. Of course, this is a long way from a physical implementation, and even further from an actual useful device, but it does illustrate the potential of harnessing the complexity of physical systems rather than trying to force them to approximate digital electronics. Going further in this direction, the chemical system will fluctuate about its steady-state average (x'1,x'2,x'3); it may be that these fluctuations can be directly related to our uncertainty in our estimate of (p(y1),p(y2),p(y3)).*


*In detail, the distribution of (x1,x2,x3) in the steady state may be related to the posterior probability of (p(y1),p(y2),p(y3)) given a flat prior and the data.
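Those fluctuations are easy to sample with a Gillespie-style simulation of the same reaction at finite copy number (again my own sketch: the 100-copy scale, the propensities and the run length are arbitrary choices, and the link to the posterior is the speculative part):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stochastic (Gillespie) version of X1 + X3 <=> 2 X2 at 100 total copies;
# propensities are chosen so the large-copy-number limit matches the
# mass-action model above.
n = np.array([41, 33, 26])            # copy numbers, roughly the MLE fractions
t, t_end = 0.0, 50.0
states, dwells = [], []
while t < t_end:
    a_fwd = n[0] * n[2]               # propensity of X1 + X3 -> 2 X2
    a_rev = n[1] * (n[1] - 1)         # propensity of 2 X2 -> X1 + X3
    dt = rng.exponential(1.0 / (a_fwd + a_rev))
    states.append(n.copy())
    dwells.append(dt)
    step = [-1, 2, -1] if rng.random() < a_fwd / (a_fwd + a_rev) else [1, -2, 1]
    n = n + step
    t += dt

states = np.array(states, dtype=float) / 100   # fractions of the total
dwells = np.array(dwells)                      # time spent in each state
mean = np.average(states, axis=0, weights=dwells)
std = np.sqrt(np.average((states - mean) ** 2, axis=0, weights=dwells))
print("steady-state mean:", mean.round(3))
print("fluctuation (std):", std.round(3))
```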