From the “Preface to the English Edition” of “The Theory of Money and Credit” by Ludwig von Mises: “All proposals that aim to do away with the consequences of perverse economic and financial policy, merely by reforming the monetary and banking system, are fundamentally misconceived. Money is nothing but a medium of exchange and it completely fulfills its function when the exchange of goods and services is carried on more easily with its help than would be possible by means of barter. Attempts to carry out economic reforms from the monetary side can never amount to anything but an artificial stimulation of economic activity by an expansion of the circulation, and this, as must constantly be emphasized, must necessarily lead to crisis and depression. Recurring economic crises are nothing but the consequence of attempts, despite all the teachings of experience and all the warnings of the economists, to stimulate economic activity by means of additional credit.”

Mathematicians of the day.

Posted on by Augustus Van Dusen

From Quanta Magazine: Time’s Arrow Traced to Quantum Source

“Time’s Arrow Traced to Quantum Source” is an interesting article about quantum entanglement, quantum information, entropy, and time’s arrow.

Coffee cools, buildings crumble, eggs break and stars fizzle out in a universe that seems destined to degrade into a state of uniform drabness known as thermal equilibrium. The astronomer-philosopher Sir Arthur Eddington in 1927 cited the gradual dispersal of energy as evidence of an irreversible “arrow of time.”

But to the bafflement of generations of physicists, the arrow of time does not seem to follow from the underlying laws of physics, which work the same going forward in time as in reverse. By those laws, it seemed that if someone knew the paths of all the particles in the universe and flipped them around, energy would accumulate rather than disperse: Tepid coffee would spontaneously heat up, buildings would rise from their rubble and sunlight would slink back into the sun.

Now, physicists are unmasking a more fundamental source for the arrow of time: Energy disperses and objects equilibrate, they say, because of the way elementary particles become intertwined when they interact — a strange effect called “quantum entanglement.”

“Finally, we can understand why a cup of coffee equilibrates in a room,” said Tony Short, a quantum physicist at Bristol. “Entanglement builds up between the state of the coffee cup and the state of the room.”


Popescu, Short and their colleagues Noah Linden and Andreas Winter reported the discovery in the journal Physical Review E in 2009, arguing that objects reach equilibrium, or a state of uniform energy distribution, within an infinite amount of time by becoming quantum mechanically entangled with their surroundings. Similar results by Peter Reimann of the University of Bielefeld in Germany appeared several months earlier in Physical Review Letters. Short and a collaborator strengthened the argument in 2012 by showing that entanglement causes equilibration within a finite time. And, in work that was posted on the scientific preprint site in February, two separate groups have taken the next step, calculating that most physical systems equilibrate rapidly, on time scales proportional to their size. “To show that it’s relevant to our actual physical world, the processes have to be happening on reasonable time scales,” Short said.

In 2009, the Bristol group’s proof resonated with quantum information theorists, opening up new uses for their techniques. It showed that as objects interact with their surroundings — as the particles in a cup of coffee collide with the air, for example — information about their properties “leaks out and becomes smeared over the entire environment,” Popescu explained. This local information loss causes the state of the coffee to stagnate even as the pure state of the entire room continues to evolve. Except for rare, random fluctuations, he said, “its state stops changing in time.”

Consequently, a tepid cup of coffee does not spontaneously warm up. In principle, as the pure state of the room evolves, the coffee could suddenly become unmixed from the air and enter a pure state of its own. But there are so many more mixed states than pure states available to the coffee that this practically never happens — one would have to outlive the universe to witness it. This statistical unlikelihood gives time’s arrow the appearance of irreversibility. “Essentially entanglement opens a very large space for you,” Popescu said. “It’s like you are at the park and you start next to the gate, far from equilibrium. Then you enter and you have this enormous place and you get lost in it. And you never come back to the gate.”

In the new story of the arrow of time, it is the loss of information through quantum entanglement, rather than a subjective lack of human knowledge, that drives a cup of coffee into equilibrium with the surrounding room. The room eventually equilibrates with the outside environment, and the environment drifts even more slowly toward equilibrium with the rest of the universe. The giants of 19th century thermodynamics viewed this process as a gradual dispersal of energy that increases the overall entropy, or disorder, of the universe. Today, Lloyd, Popescu and others in their field see the arrow of time differently. In their view, information becomes increasingly diffuse, but it never disappears completely. So, they assert, although entropy increases locally, the overall entropy of the universe stays constant at zero.

“The universe as a whole is in a pure state,” Lloyd said. “But individual pieces of it, because they are entangled with the rest of the universe, are in mixtures.”

According to the scientists, our ability to remember the past but not the future, another historically confounding manifestation of time’s arrow, can also be understood as a buildup of correlations between interacting particles. When you read a message on a piece of paper, your brain becomes correlated with it through the photons that reach your eyes. Only from that moment on will you be capable of remembering what the message says. As Lloyd put it: “The present can be defined by the process of becoming correlated with our surroundings.”

The entire article can be read here.

The last paragraph quoted is quite interesting. In evolutionary epistemology, there is an emphasis on the fit of an organism to its surroundings. [1] This same notion has been used to explain how our brains perceive reality. Thus it is interesting to see this idea in a context that links time’s arrow and how our brains remember past events.

[1] See Darwin Machines and the Nature of Knowledge by Henry Plotkin.

Posted in Science_Technology

From Quanta Magazine: A Fundamental Theory to Model the Mind

“A Fundamental Theory to Model the Mind” is a summary of the theory that the functioning of our brains can be explained by self-organized criticality. How important a role self-organized criticality plays is certainly open to debate, as are specific models of it. However, self-organized criticality is a useful way of thinking about how our brains work. Note that this is a theme explored in a book I am currently reading, Rhythms of the Brain by Gyorgy Buzsaki.

In 1999, the Danish physicist Per Bak proclaimed to a group of neuroscientists that it had taken him only 10 minutes to determine where the field had gone wrong. Perhaps the brain was less complicated than they thought, he said. Perhaps, he said, the brain worked on the same fundamental principles as a simple sand pile, in which avalanches of various sizes help keep the entire system stable overall — a process he dubbed “self-organized criticality.”

As much as scientists in other fields adore outspoken, know-it-all physicists, Bak’s audacious idea — that the brain’s ordered complexity and thinking ability arise spontaneously from the disordered electrical activity of neurons — did not meet with immediate acceptance.

But over time, in fits and starts, Bak’s radical argument has grown into a legitimate scientific discipline. Now, about 150 scientists worldwide investigate so-called “critical” phenomena in the brain, the topic of at least three focused workshops in 2013 alone. Add the ongoing efforts to found a journal devoted to such studies, and you have all the hallmarks of a field moving from the fringes of disciplinary boundaries to the mainstream.

In the 1980s, Bak first wondered how the exquisite order seen in nature arises out of the disordered mix of particles that constitute the building blocks of matter. He found an answer in phase transition, the process by which a material transforms from one phase of matter to another. The change can be sudden, like water evaporating into steam, or gradual, like a material becoming superconductive. The precise moment of transition — when the system is halfway between one phase and the other — is called the critical point, or, more colloquially, the “tipping point.”

Classical phase transitions require what is known as precise tuning: in the case of water evaporating into vapor, the critical point can only be reached if the temperature and pressure are just right. But Bak proposed a means by which simple, local interactions between the elements of a system could spontaneously reach that critical point — hence the term self-organized criticality.

There can be no phase transitions without a critical point, and without transitions, a complex system — like Bak’s sand pile, or the brain — cannot adapt. That is why avalanches only show up at criticality, a “sweet spot” where a system is perfectly balanced between order and disorder, according to Plenz. They typically occur when the brain is in its normal resting state. Avalanches are a mechanism by which a complex system avoids becoming trapped, or “phase-locked,” in one of two extreme cases. At one extreme, there is too much order, such as during an epileptic seizure; the interactions among elements are too strong and rigid, so the system cannot adapt to changing conditions. At the other, there is too much disorder; the neurons aren’t communicating as much, or aren’t as broadly interconnected throughout the brain, so information can’t spread as efficiently and, once again, the system is unable to adapt.

Another study collected data from epileptic subjects during seizures. The EEG recordings revealed that mid-seizure, the telltale avalanches of criticality vanished. There was too much synchronization among neurons, and then, Plenz said, “information processing breaks down, people lose consciousness, and they don’t remember what happened until they recover.”

Sporns emphasizes that it remains to be seen just how robust this phenomenon might be in the brain, pointing out that more evidence is needed beyond the observation of power laws in brain dynamics. In particular, the theory still lacks a clear description for how criticality arises from neurobiological mechanisms — the signaling of neurons in local and distributed circuits. But he admits that he is rooting for the theory to succeed. “It makes so much sense,” he said. “If you were to design a brain, you would probably want criticality in the mix. But ultimately, it is an empirical question.”

The entire article can be read here.
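Bak’s sand pile is concrete enough to simulate in a few lines. Below is my own minimal sketch of the standard Bak-Tang-Wiesenfeld toppling rule; the grid size, the threshold of four grains, and the random driving scheme are the usual textbook choices, not details taken from the article.

```python
# Minimal sketch of the Bak-Tang-Wiesenfeld sand pile. A site holding four
# or more grains "topples", shedding one grain to each of its four
# neighbors; grains falling off the edge of the grid are lost.
import random

def topple(grid, n):
    """Relax the pile; return the avalanche size (number of toppling events)."""
    avalanche = 0
    unstable = [(i, j) for i in range(n) for j in range(n) if grid[i][j] >= 4]
    while unstable:
        i, j = unstable.pop()
        if grid[i][j] < 4:
            continue  # may have been relaxed already via a duplicate entry
        grid[i][j] -= 4
        avalanche += 1
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < n and 0 <= nj < n:
                grid[ni][nj] += 1
                if grid[ni][nj] >= 4:
                    unstable.append((ni, nj))
    return avalanche

def drive(n=20, grains=5000, seed=0):
    """Drop grains one at a time onto random sites, recording avalanche sizes."""
    random.seed(seed)
    grid = [[0] * n for _ in range(n)]
    sizes = []
    for _ in range(grains):
        grid[random.randrange(n)][random.randrange(n)] += 1
        sizes.append(topple(grid, n))
    return sizes

sizes = drive()
print(max(sizes))
```

Collecting avalanche sizes over many dropped grains shows the signature of self-organized criticality the article describes: most drops cause little or no toppling, while occasional avalanches span much of the grid, with sizes spread over many scales.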

Posted in Science_Technology

From Colah’s Blog: Neural Networks, Manifolds, and Topology

“Neural Networks, Manifolds, and Topology” is an interesting blog post that explores the links between machine learning, in this case neural networks, and aspects of mathematics. This post builds on Jesse Johnson’s posts about neural networks on his blog, The Shape of Data: Neural Networks 1: The Neuron, Neural Networks 2: Evaluation, and Neural Networks 3: Training.

Here are some excerpts from the article.

Recently, there’s been a great deal of excitement and interest in deep neural networks because they’ve achieved breakthrough results in areas such as computer vision.

However, there remain a number of concerns about them. One is that it can be quite challenging to understand what a neural network is really doing. If one trains it well, it achieves high quality results, but it is challenging to understand how it is doing so. If the network fails, it is hard to understand what went wrong.

While it is challenging to understand the behavior of deep neural networks in general, it turns out to be much easier to explore low-dimensional deep neural networks – networks that only have a few neurons in each layer. In fact, we can create visualizations to completely understand the behavior and training of such networks. This perspective will allow us to gain deeper intuition about the behavior of neural networks and observe a connection linking neural networks to an area of mathematics called topology.

A number of interesting things follow from this, including fundamental lower-bounds on the complexity of a neural network capable of classifying certain datasets.

The Manifold Hypothesis

Is this relevant to real world data sets, like image data? If you take the manifold hypothesis really seriously, I think it bears consideration.

The manifold hypothesis is that natural data forms lower-dimensional manifolds in its embedding space. There are both theoretical and experimental reasons to believe this to be true. If you believe this, then the task of a classification algorithm is fundamentally to separate a bunch of tangled manifolds.
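The manifold hypothesis can be made tangible with a toy experiment of my own (nothing from the post itself): embed a one-dimensional manifold, a circle, in a higher-dimensional space, then check that the number of near neighbors of each point grows roughly linearly with the radius, the signature of intrinsic dimension one rather than the ambient dimension.

```python
# Toy check of intrinsic vs. ambient dimension. The embedding below is an
# arbitrary fixed linear map of the plane into 8-D, so the image of the
# circle is still a 1-D curve sitting in 8-D space.
import math

def circle_in_8d(n=200):
    pts = []
    for k in range(n):
        t = 2 * math.pi * k / n
        x, y = math.cos(t), math.sin(t)
        pts.append((x, y, x + y, x - y, 2 * x, 2 * y, x + 2 * y, 3 * x - y))
    return pts

def neighbors_within(pts, r):
    """Average number of other points within distance r of each point."""
    n = len(pts)
    total = 0
    for i in range(n):
        for j in range(n):
            if i != j and math.dist(pts[i], pts[j]) <= r:
                total += 1
    return total / n

pts = circle_in_8d()
c1, c2 = neighbors_within(pts, 0.5), neighbors_within(pts, 1.0)
print(c2 / c1)  # ratio near 2 signals intrinsic dimension 1, not 8
```

Doubling the radius roughly doubles the neighbor count here; for data genuinely filling the 8-D ambient space the count would grow far faster. This is the picture behind treating classification as untangling low-dimensional manifolds.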

K-Nearest Neighbor Layers

I’ve also begun to think that linear separability may be a huge, and possibly unreasonable, amount to demand of a neural network. In some ways, it feels like the natural thing to do would be to use k-nearest neighbors (k-NN). However, k-NN’s success is greatly dependent on the representation it classifies data from, so one needs a good representation before k-NN can work well.

As a first experiment, I trained some MNIST networks (two-layer convolutional nets, no dropout) that achieved ∼1% test error. I then dropped the final softmax layer and used the k-NN algorithm. I was able to consistently achieve a reduction in test error of 0.1-0.2%.

Still, this doesn’t quite feel like the right thing. The network is still trying to do linear classification, but since we use k-NN at test time, it’s able to recover a bit from mistakes it made.

k-NN is differentiable with respect to the representation it’s acting on, because of the 1/distance weighting. As such, we can train a network directly for k-NN classification. This can be thought of as a kind of “nearest neighbor” layer that acts as an alternative to softmax.

We don’t want to feedforward our entire training set for each mini-batch because that would be very computationally expensive. I think a nice approach is to classify each element of the mini-batch based on the classes of other elements of the mini-batch, giving each one a weight of 1/(distance from classification target).
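The in-batch weighting rule described above is easy to write down. Here is my own sketch of it (a reconstruction, not code from the post): each batch element receives class votes from the other elements, weighted by the reciprocal of the distance between their representations. In a trained network the distances would be taken in the learned representation space so that gradients flow through them; plain coordinate tuples stand in for representations here.

```python
# Sketch of a 1/distance-weighted "nearest neighbor layer" over a mini-batch.
import math

def knn_layer_probs(reps, labels, num_classes, eps=1e-9):
    """For each representation, produce class probabilities from the
    1/distance-weighted labels of the *other* batch elements."""
    probs = []
    for i, r in enumerate(reps):
        votes = [0.0] * num_classes
        for j, (s, y) in enumerate(zip(reps, labels)):
            if i == j:
                continue  # an element must not vote for itself
            votes[y] += 1.0 / (math.dist(r, s) + eps)
        total = sum(votes)
        probs.append([v / total for v in votes])
    return probs

# Toy batch: two tight clusters. Each point's weighted vote is dominated by
# its own cluster, so the predicted class matches the cluster label.
reps = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)]
labels = [0, 0, 1, 1]
probs = knn_layer_probs(reps, labels, num_classes=2)
preds = [max(range(2), key=lambda c: p[c]) for p in probs]
print(preds)  # [0, 0, 1, 1]
```

Because every operation here is a smooth function of the distances, the same rule could in principle sit at the top of a network in place of softmax and be trained through directly, which is the point being made in the excerpt.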

Sadly, even with sophisticated architecture, using k-NN only gets down to 4-5% test error – and using simpler architectures gets worse results. However, I’ve put very little effort into playing with hyper-parameters.

Still, I really aesthetically like this approach, because it seems like what we’re “asking” the network to do is much more reasonable. We want points of the same manifold to be closer than points of others, as opposed to the manifolds being separable by a hyperplane. This should correspond to inflating the space between manifolds for different categories and contracting the individual manifolds. It feels like simplification.

Topological properties of data, such as links, may make it impossible to linearly separate classes using low-dimensional networks, regardless of depth. Even in cases where it is technically possible, such as spirals, it can be very challenging to do so.

To accurately classify data with neural networks, wide layers are sometimes necessary. Further, traditional neural network layers do not seem to be very good at representing important manipulations of manifolds; even if we were to cleverly set weights by hand, it would be challenging to compactly represent the transformations we want. New layers, specifically motivated by the manifold perspective of machine learning, may be useful supplements.

The entire article can be read here.

Posted in Machine Learning (Narrow Artificial Intelligence), Mathematics

The Abenomics Surprises Just Keep Coming … by Pater Tenebrarum

In writing about the woes of Japan, Pater Tenebrarum of Acting Man blog highlights some important economic concepts. Tenebrarum has written a treatise on economics that is scattered throughout his numerous blog posts. Below are some examples from “The Abenomics Surprises Just Keep Coming …”.

On how inflation cannot create real economic growth:

If printing more money and pushing prices higher are what it takes to magically ‘create economic growth’, one must wonder why emperor Diocletian’s coin clipping scheme and John Law’s Mississippi bubble failed. Why hasn’t even a single inflationary scheme tried in the course of history succeeded?

The answer should be obvious: printing money cannot create real savings or capital. This does not mean that it has no economic effects, and initially, these effects often appear to be positive, as a boom is usually set into motion. Everybody feels good, as asset prices rise and economic activity seems to revive. But the boom is always built on quicksand. It creates no real wealth: scarce capital will be malinvested and ultimately consumed.

On how the Japanese government is using inflation rather than tax increases or outright default to deal with its enormous debt:

Japan is facing a demographic problem. Its population is declining and aging rapidly. More and more people need to rely on their savings to make ends meet. Unemployment meanwhile is very low, as the active labor force is shrinking. It is not immediately obvious which problem Abenomics was supposed to ‘solve’. Only one conclusion makes sense: the government is trying to reduce the burden of its own debt in an attempt to ‘inflate it away’.

This means however that the inflationary push is mainly meant to act like a tax on everyone in Japan. The government might as well have raised taxes outright.

On the absurdity of the fear of deflation exhibited by all schools of economics except the Austrian school:

Economic growth has absolutely nothing to do with rising consumer prices. In fact, mildly declining prices and the associated increase in real incomes are the hallmark of an unhampered market economy using sound money. Even the Federal Reserve was forced to admit in a 2004 study (Atkeson, Andrew, and Kehoe, Patrick, Federal Reserve Bank of Minneapolis, “Deflation and Depression: Is There an Empirical Link?”; h/t Chris Casey) that ‘no empirical link between deflation and depression could be established.’

Again, no-one should be surprised that in-depth empirical studies actually tend to agree with what should be clear from economic theory anyway, although one must not lose sight of the fact that empirical studies cannot serve to ‘prove’ or ‘disprove’ the correctness of a theory. In this respect, economic theory differs from the natural sciences, which allow for conducting controlled and repeatable experiments. No such experiments can be conducted in economics and every slice of economic history is highly complex and unique and co-determined by a multitude of factors. Even though the laws of economics are always operative, it is not possible to deduce them or prove or disprove them from the study of economic statistics.

From a theoretical point of view it can be shown that real economic growth is indeed possible without expanding the supply of money and that it perfectly agrees with falling prices for consumer goods. Entrepreneurial profits do not depend on the direction of consumer prices, they depend on the price spreads between inputs and outputs. As we have frequently pointed out, if this were not the case, the computer industry – indeed, the entire electronics industry – would have been in a permanent economic depression from the day it was born.

The entire article can be read here.

Posted in Political_Economy

Mass Production of Red Blood Cells from Pluripotent Stem Cells

From “First volunteers to receive blood cultured from stem cells in 2016”:

Red blood cells cultured in a laboratory will be trialled in human volunteers for the first time within the next three years, as part of a long-term research programme funded by the Wellcome Trust.

The consortium will be using pluripotent stem cells, which are able to form any other cell in the body. The team will guide these cells in the lab to multiply and become fresh red blood cells for use in humans, with the hope of making the process scalable for manufacture on a commercial scale. The team hopes to start the first-in-man trial by late 2016.

Blood transfusions play a critical role in current clinical practice, with over 90m red blood cell transfusions taking place each year worldwide. Transfusions are currently made possible by blood donation programmes, but supplies are insufficient in many countries globally. Blood donations also bring a range of challenges with them, including the risk of transmitting infections, the potential for incompatibility with the recipient’s immune system and the possibility of iron overload. The use of cultured red blood cells in transfusions could avoid these risks and provide fresh, younger cells that may have a clinical advantage by surviving longer and performing better.

Professor Marc Turner, Principal Investigator, said: “Producing a cellular therapy which is of the scale, quality and safety required for human clinical trials is a very significant challenge, but if we can achieve success with this first-in-man clinical study it will be an important step forward to enable populations all over the world to benefit from blood transfusions. These developments will also provide information of value to other researchers working on the development of cellular therapies.”

The entire article can be read here.

H/T Fight Aging!

Given all of the unfortunate hype surrounding stem cells in the past, a large-scale commercial application of stem cell technology such as this is encouraging. We must always remember that most research efforts that produce positive results in a lab fail when applied to the real world. This is the nature of research, and failure to realize it is the source of the hype that all too often plagues science reporting. All reports of laboratory successes should be met with at most cautious optimism.

Posted in Science_Technology

Nanoparticles cause cancer cells to self-destruct

Using magnetically controlled nanoparticles to force tumour cells to ‘self-destruct’ sounds like science fiction, but could be a future part of cancer treatment, according to research from Lund University in Sweden.

“The clever thing about the technique is that we can target selected cells without harming surrounding tissue. There are many ways to kill cells, but this method is contained and remote-controlled”, said Professor Erik Renström.

The point of the new technique is that it is much more targeted than trying to kill cancer cells with techniques such as chemotherapy.

“Chemotherapy can also affect healthy cells in the body, and it therefore has serious side-effects. Radiotherapy can also affect healthy tissue around the tumour.

“Our technique, on the other hand, is able to attack only the tumour cells”, said Enming Zhang, one of the first authors of the study.

In brief, the technique involves getting the nanoparticles into a tumour cell, where they bind to lysosomes, the units in the cell that perform ‘cleaning patrols’. The lysosomes have the ability to break down foreign substances that have entered a cell. They can also break down the entire cell through a process known as ‘controlled cell death’, a type of destruction where damaged cells dissolve themselves.

The researchers have used nanoparticles of iron oxide that have been treated with a special form of magnetism. Once the particles are inside the cancer cells, the cells are exposed to a magnetic field, and the nanoparticles begin to rotate in a way that causes the lysosomes to start destroying the cells.

The research group at Lund University is not the first to try and treat cancer using supermagnetic nanoparticles. However, previous attempts have focused on using the magnetic field to create heat that kills the cancer cells. The problem with this is that the heat can cause inflammation that risks harming surrounding, healthy tissue. The new method, on the other hand, in which the rotation of the magnetic nanoparticles can be controlled, only affects the tumour cells that the nanoparticles have entered.

The rest of the article can be read here.



H/T Fight Aging!

Posted in Science_Technology

A Weekly Dose of Hazlitt: Ike’s Semi New Deal

“Ike’s Semi New Deal” is the title of Henry Hazlitt’s Newsweek column from February 15, 1954. One of the great trends in the US over the past century has been the convergence of political opinion from two parties into a single party with very slight differences. By the time that Eisenhower became president, this trend had reached the point that a Republican president did not even contemplate dismantling FDR’s New Deal. Now we have what appears to be a permanent welfare-warfare state, until the money runs out.

When President Eisenhower at his press conference on Jan. 27 was invited to comment on statements by some observers that his program was an “extension of the New Deal,” he is said to have told the reporter “in a tone of steely indignation” to compare Mr. Truman’s budget with his own.

But such a comparison is not in itself convincing. It is true that Mr. Eisenhower’s proposed Federal expenditure for the fiscal year 1955 is $12.3 billions less than Mr. Truman recommended for this fiscal year; but it is not less than Mr. Truman actually spent in 1952, his last full fiscal year in office; and it is actually $21.6 billions higher than Mr. Truman spent in the fiscal year 1951. During both those fiscal years, incidentally, the Korean war was being fought.


Posted in Political_Economy