From the “Preface to the English Edition” of “The Theory of Money and Credit” by Ludwig von Mises: “All proposals that aim to do away with the consequences of perverse economic and financial policy, merely by reforming the monetary and banking system, are fundamentally misconceived. Money is nothing but a medium of exchange and it completely fulfills its function when the exchange of goods and services is carried on more easily with its help than would be possible by means of barter. Attempts to carry out economic reforms from the monetary side can never amount to anything but an artificial stimulation of economic activity by an expansion of the circulation, and this, as must constantly be emphasized, must necessarily lead to crisis and depression. Recurring economic crises are nothing but the consequence of attempts, despite all the teachings of experience and all the warnings of the economists, to stimulate economic activity by means of additional credit.”


The US Should Have 10,000 Members of Congress by Ryan McMaken

McMaken wrote an interesting article in which he examined the number of people per representative in many countries and in the US states, and how this ratio has changed over time. He also cites research suggesting that decreasing the number of people per representative can act as a brake on government intervention.

Constituency size has been shown to correlate with government spending in many cases, as shown in research by Mark Thornton, George S. Ford, and Marc Ulrich. See here and here.

As Thornton et al. conclude:

[T]he evidence is very suggestive that constituency size provides an explanation for much of the trend, or upward drift in government spending, because of the fixed-sized nature of most legislatures. Potentially, constituency size could be adjusted to control the growth of government.

They note that the reason for this appears to be diminished engagement between elected officials and constituents when constituency sizes are very large.

Also key in this analysis is the problem of “asymmetry” of interests between ordinary voters and minority interest groups. For example, a group of farmers interested in maintaining a subsidy will go to a lot of effort in order to lobby elected officials. This will be true even if the opportunity cost of access to elected officials is very high — the potential gains for lobbying success are very high. For voters who don’t want to pay for the farmers’ subsidies, however, the calculus is very different. Each farmer might receive thousands of dollars in subsidies. But each taxpayer only pays a small fraction of that to keep the subsidies flowing. A taxpayer may wish to lobby his elected official against the subsidy, but if meeting with an elected official is costly in terms of time and money, the taxpayer will quickly find it’s not worth the trouble. Thus, lowered access to elected officials is more likely to prevent lobbying from individual voters than from established and well-funded interests. 

Other factors mentioned by Thornton et al. and others include:

  • Large constituencies increase the cost of running campaigns, and thus require greater reliance on large wealth interests for media buys and access to mass media. The cost of running a statewide campaign in California, for example, is considerably larger than the cost of running a statewide campaign in Vermont. Constituencies spread across several media markets are especially costly. 
  • Elected officials, unable to engage a sizable portion of their constituencies, rely on large interest groups claiming to be representative of constituents. 
  • Voters disengage because they realize their vote is worth less in larger constituent groups. 
  • Voters disengage because they are not able to meet the candidate personally. 
  • Voters disengage because elections in larger constituencies are less likely to focus on issues that are of personal, local interest to many of the voters. 
  • The ability to schedule a personal meeting with an elected official is far more difficult in a large constituency than a small one. 
  • Elected officials recognize that a single voter is of minimal importance in a large constituency, so candidates prefer to rely on mass media rather than personal interaction with voters. 
  • Larger constituent groups are more religiously, ethnically, culturally, ideologically, and economically diverse. This means elected officials from that constituent group are less likely to share social class, ethnic group, and other characteristics with a sizable number of their constituents. 
  • Larger constituencies often mean the candidate is more physically remote, even when the candidate is at “home” and not at a distant parliament or congress. This further reduces access. 

In other words, there is compelling reason to believe that smaller is better.

The Anti-Federalists Wanted Smaller Constituent Sizes 

For a look at what was originally envisioned for constituency size in the United States, we can consult the Constitution itself, which mandates a minimum of 30,000 constituents per congressional district. Noticing that the Constitution did not actually mandate increases in the size of Congress to match population growth, the anti-federalists attempted to amend the Constitution to guarantee such increases (see “article the first”). The anti-federalist Cato, in his Letter No. 5, noted “that the number of representatives are too few” among several “evils that will attend the adoption” of the new constitution. As usual, the opponents of the 1787 Constitution have been proven correct. 

Nevertheless, in 1790, members of Congress represented on average 37,000 people. In Massachusetts in 1800, for example, there were 422,000 people sharing 16 members of Congress. Senators were not directly elected at the time, however, so if we include only the 14 members of the House, the average constituency size in Massachusetts in 1800 was 30,142. To reach a constituent size like this today, the US Congress would require roughly 10,000 members. If this strikes us as “impractical,” then it’s likely that the United States is far too large to offer anything that we might seriously call “representative government.” It is far more likely that the notion of a single nation-state with 318,000,000 people is what is impractical.
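As a quick back-of-the-envelope check on the arithmetic above (the 14 House members are Massachusetts’s 16 members of Congress minus its two senators):

```python
# Back-of-the-envelope check of the constituency arithmetic above.
ma_population_1800 = 422_000
ma_house_members_1800 = 14  # 16 members of Congress minus 2 senators

avg_constituency = ma_population_1800 / ma_house_members_1800
print(f"Average MA House constituency in 1800: {avg_constituency:,.0f}")
# ~30,143 (the article truncates this to 30,142)

us_population = 318_000_000
members_needed = us_population / avg_constituency
print(f"House members needed today at that ratio: {members_needed:,.0f}")
# ~10,549, i.e., roughly 10,000
```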

Constituent Size Is Just One Strategy 

As noted by Thornton et al., reducing constituent size could be a helpful strategy in reducing government spending, but my purpose here is not to suggest that democracies composed of small constituencies are some ideal form of government. Nor do I claim that the US Constitution should be regarded as an unassailable authority on these matters. Constituent size by itself cannot be used to ensure that human rights are protected or that peaceful people are left alone. Certainly, ideology is a major independent factor in limiting state power: if people want a large, intrusive government, they’re going to get it regardless of constituent size. And, as Nathan Benefield of the Commonwealth Foundation has noted, other important factors include the size of legislative staffs, the degree to which legislators are “professional legislators” rather than “citizen legislators,” and the size of legislative salaries. 

The change over time in constituent sizes does call into question how modern defenders of the US political system can so blithely insist that it is illegitimate to oppose the federal government through nullification or secession. “You have representatives in Congress!” is the common refrain that is supposed to silence dissent. But, as we have seen, “representation” today looks nothing at all like it did even in the early 20th century. Legislators are increasingly inaccessible, physically distant, wealthy, and expensive to influence. In many cases, they spend virtually the entire year thousands of miles from the people they supposedly represent. 

If this is “representation,” our definition of the word has become thoroughly flawed.

The entire article can be read here.


From Penn State News: 3D printing produces cartilage from strands of bioink

Strands of cow cartilage substitute for ink in a 3D bioprinting process that may one day create cartilage patches for worn out joints, according to a team of engineers.

“Our goal is to create tissue that can be used to replace large amounts of worn out tissue or design patches,” said Ibrahim T. Ozbolat, associate professor of engineering science and mechanics. “Those who have osteoarthritis in their joints suffer a lot. We need a new alternative treatment for this.”

Cartilage is a good tissue to target for scale-up bioprinting because it is made up of only one cell type and has no blood vessels within the tissue. It is also a tissue that cannot repair itself. Once cartilage is damaged, it remains damaged.

Previous attempts at growing cartilage began with cells embedded in a hydrogel — a substance composed of polymer chains and about 90 percent water — that is used as a scaffold to grow the tissue.

“Hydrogels don’t allow cells to grow as normal,” said Ozbolat, who is also a member of the Penn State Huck Institutes of the Life Sciences. “The hydrogel confines the cells and doesn’t allow them to communicate as they do in native tissues.”

This leads to tissues that do not have sufficient mechanical integrity. Degradation of the hydrogel also can produce toxic compounds that are detrimental to cell growth.

Ozbolat and his research team developed a method to produce larger-scale tissues without using a scaffold. They create a tiny tube — 3 to 5 hundredths of an inch (roughly 0.8 to 1.3 millimeters) in diameter — made of alginate, an algae extract. They inject cartilage cells into the tube and allow them to grow for about a week and adhere to each other. Because cells do not stick to alginate, the researchers can remove the tube, leaving a strand of cartilage. The researchers reported their results in the current issue of Scientific Reports.

The cartilage strand substitutes for ink in the 3D printing process. Using a specially designed prototype nozzle that can hold and feed the cartilage strand, the 3D printer lays down rows of cartilage strands in any pattern the researchers choose. After about half an hour, the cartilage patch self-adheres enough to move to a petri dish. The researchers put the patch in nutrient media to allow it to further integrate into a single piece of tissue. Eventually the strands fully attach and fuse together.

The rest of the article can be read here.

While this is early stage research and thus enthusiasm should be muted, it does seem to be a promising approach. Also, professional athletes would be a prime market of wealthy early adopters of such technology due to their chronic knee problems. Furthermore, an aging population means more people with knee cartilage damage, yet another large market. If this technology makes it through human trials, such markets could assist in advancing the technology quickly and driving down costs.

H/T Fight Aging!


From The Tenth Amendment Center: Signed into Law: New Hampshire Bill Expands State’s Medical Marijuana Program

Earlier this month, New Hampshire Gov. Maggie Hassan signed a bill expanding the state’s medical marijuana law, further nullifying federal prohibition in practice.

Rep. David Luneau (I) introduced House Bill 1453 (HB1453) earlier this year. The legislation adds ulcerative colitis to the list of debilitating medical conditions that qualify a patient to access medicinal cannabis under the state’s medical marijuana law.

The New Hampshire House and Senate both passed HB1453 on a voice vote. With Hassan’s signature, this expansion of the state’s medical marijuana law will go into effect on Aug. 2.

This is the second time the New Hampshire legislature has expanded the conditions eligible for treatment with medicinal cannabis. The state legalized marijuana for medical use in 2013. The first dispensaries opened in the state earlier this year. Allowing patients suffering from ulcerative colitis to access medical marijuana will further the medicinal cannabis program in New Hampshire.

EFFECT ON FEDERAL PROHIBITION

New Hampshire’s medical marijuana program removes one layer of laws prohibiting the possession and use of marijuana, but federal prohibition remains in place.

Of course, the federal government lacks any constitutional authority to ban or regulate marijuana within the borders of a state, despite the opinion of the politically connected lawyers on the Supreme Court. If you doubt this, ask yourself why it took a constitutional amendment to institute federal alcohol prohibition.

While New Hampshire law does not alter federal law, it takes a step toward nullifying in effect the federal ban. FBI statistics show that law enforcement makes approximately 99 of every 100 marijuana arrests under state, not federal, law. By easing state prohibition, New Hampshire essentially sweeps away part of the basis for 99 percent of marijuana arrests.

Furthermore, figures indicate it would take 40 percent of the DEA’s yearly budget just to investigate and raid all of the dispensaries in Los Angeles – a single city in a single state. That doesn’t include the cost of prosecution. The lesson? The feds lack the resources to enforce marijuana prohibition without state assistance.

The rest of the article can be read here.


From The Tenth Amendment Center: New Pennsylvania Law Sets Foundation to Reject EPA “Clean Power Plan”

Gov. Tom Wolf has signed a bill expanding and clarifying a state law that takes a small but important first step in setting the foundation to reject some federal EPA rules and regulations in practice.

Sen. Donald White (R) introduced Senate Bill 1195 (SB1195) earlier this year. The legislation amended and clarified a law passed in 2014 that requires the state legislature to give approval before the state submits any plan to comply with EPA “Clean Power Plan” emission requirements.

Under Pennsylvania law, the state Department of Environmental Protection must submit any plan to comply with EPA Clean Power Plan emission requirements to the legislature at least 100 calendar days before sending it to the EPA. If either legislative chamber rejects the state plan, the DEP must address the reasons for the rejection and modify it, or draft a new plan. New provisions just signed into law by the governor require a 180-day public comment period with at least four public hearings before the DEP can submit a new plan after legislative disapproval.

The amended version of the law also prohibits the DEP from submitting any plan until the expiration of the stay on the Clean Power Plan ordered by the Supreme Court.

The Senate approved the new provisions 41-9. The House passed the measure 171-18. With Gov. Wolf’s signature, the changes to the law went into immediate effect.

The Pennsylvania law does not guarantee the state will ultimately reject compliance with these EPA mandates, but it does set the foundation to do so. The legislature now has the power to put an indefinite halt to compliance with the Clean Power Plan. It also brings the entire process into the public spotlight, allowing Pennsylvania residents to have input into it.

The rest of the article can be read here.


The Fed and Bernanke Are Wrong About the Natural Interest Rate by Joseph T. Salerno

In this outstanding article, Salerno explains Wicksell’s formulation of the concept of the natural rate of interest and how it has been abused and misused by the Fed.

A few days before the last FOMC meeting, The Wall Street Journal reported on the Fed’s hand-wringing over its inability to identify the “natural rate of interest” and explain its recent movements. According to the report, the Fed uses the “mysterious natural rate” to guide its decisions in setting the target for the fed funds rate. Modern macroeconomics defines the natural rate of interest as the (real) rate of interest that maintains the economy in a Keynesian state of bliss, with stable prices (or moderate inflation) and actual real GDP equal to “potential” or full-employment GDP.

According to Fed economists, the natural rate is “unobserved” and therefore “has to be inferred from observable data” using econometric models or other statistical techniques. But different models yield different estimates of the natural rate. These estimates range from “persistently negative since 2008” to a “reasonable range” between 1% and 2%. Nor are these estimates very precise. One model estimates a natural rate of 0.5% from 2010–2015 with a 90% confidence interval of 4 percentage points, meaning that there is a 90 percent probability that the natural interest rate lies somewhere between -1.5% and 2.5%. Another model yields an estimate of the natural rate for the year 2000 that includes both 0% and 6% within the 90% confidence interval! But the main problem befuddling monetary policymakers is that almost all estimates indicate that the natural rate fell precipitously, from 2–2.5% in the years leading up to the financial crisis to near or even slightly below zero, and has remained there since 2009. Further deepening the mystery is that the rate shows no signs of recovering despite the fact that the real economy has considerably strengthened since 2009.
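The interval arithmetic is easy to verify; a minimal check using the figures quoted above:

```python
# Check of the confidence-interval arithmetic quoted above.
point_estimate = 0.5  # estimated natural rate, percent (2010-2015 model)
ci_width = 4.0        # width of the 90% confidence interval, percentage points

lower = point_estimate - ci_width / 2
upper = point_estimate + ci_width / 2
print(f"90% confidence interval: [{lower}%, {upper}%]")  # [-1.5%, 2.5%]
```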

For Keynes and his contemporary disciples, then, the natural or neutral rate of interest is determined wholly in financial markets and is one of the main determinants of the level of investment spending and the real rate of return on investment.

This conception is the polar opposite of the natural rate of interest as conceived by Wicksell. According to Wicksell (p. 205), who was a follower of Böhm-Bawerk and an Austrian capital theorist through and through, “the natural rate of interest [is] the real yield of capital in production.” The natural rate is thus an “intertemporal” price, or the ratio of prices between present consumption and future consumption (as embodied in capital goods), and it is wholly and directly determined by capital investment in the real sector of the economy. The loan rate of interest is therefore a mere shadow of the natural rate. As Wicksell (p. 192) put it: “That loan rate that is a direct expression of the real rate, we call the normal rate.” This “normal” or “natural” loan rate derives from the natural rate of return on investment throughout the economy’s capital structure and moves in near lock-step with it: “The rate of interest at which the demand for loan capital and the supply of savings exactly agree … more or less corresponds to the expected yield on the newly created capital.” 

For Wicksell, then, in sharp contrast to Keynes, the natural rate is a real price spontaneously determined by market forces, to which the loan rate normally and automatically tends to adjust. In his view, one of the main reasons why the two rates might substantially diverge from one another is that fractional-reserve banks have the power to expand credit by lending deposited gold or, more likely, creating and lending out bank notes and what he called “fictitious deposits.” The expansion of credit by the banks drives down the loan rate below its “normal” equivalence to the natural rate. The divergence between the two rates induces entrepreneurs to eagerly borrow the additional funds at the loan rate and invest them in production processes yielding the higher natural rate of return. The result is an increase in the demand for labor, raw materials, commodities, and machinery, and a bidding up of wage rates and other factor prices and rents. The additional money payments to laborers and other resource owners eventually cause a rise in the demand for, and prices of, consumer goods. The result is what Wicksell called a “cumulative process” of general price increases that lasts as long as banks suppress the loan rate below the natural rate. The inflationary cumulative process comes to an end only if and when credit expansion ceases and market forces are permitted to establish equality between the loan and natural rates. Theoretically, the gap between the two rates could cause an “unlimited” rise in prices. Wicksell also maintained that a cumulative process of inflation or deflation could be precipitated by a failure of fractional-reserve banks to adapt the loan rate quickly enough to an initial rise or fall in the natural rate of interest by contracting or expanding credit.
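To make the mechanism concrete, here is a deliberately crude toy model of the cumulative process. It is not Wicksell’s own formalism, and all parameters are illustrative assumptions:

```python
# A crude toy model (not Wicksell's own formalism) of the cumulative process:
# while banks hold the loan rate below the natural rate, credit expansion
# keeps bidding up prices; equalizing the two rates ends the inflation.
natural_rate = 0.04   # assumed real yield of capital in production
loan_rate = 0.02      # assumed bank loan rate, held down by credit expansion
sensitivity = 2.0     # assumed response of prices to the rate gap (illustrative)
price_level = 100.0

for year in range(1, 6):
    gap = natural_rate - loan_rate
    if gap <= 0:
        break  # loan rate back at the natural rate: the process ends
    price_level *= 1 + sensitivity * gap  # cheap credit bids up factor prices
    print(f"Year {year}: price level {price_level:.1f}")
```

As long as the gap persists, the price level compounds upward year after year, which is the sense in which the rise is theoretically “unlimited.”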

It is worth noting that Ludwig von Mises, in The Theory of Money and Credit (pp. 349–66), originally published in German in 1912, developed the Austrian theory of the business cycle on the foundation of Wicksell’s crucial analysis of the distinction between the loan rate and the natural rate of interest. Unwilling or unable to comprehend this distinction, Keynes in the General Theory (pp. 192–93) charged Mises, along with Friedrich Hayek and Lionel Robbins, with “confusing the marginal efficiency of capital with the rate of interest.” Keynes’s “marginal efficiency of capital” was his peculiar term for the expected rate of return on investment — which is nothing other than Wicksell’s natural rate.

The entire article can be read here.


A Weekly Dose of Hazlitt: The Interest Ceiling

“The Interest Ceiling” is the title of Henry Hazlitt’s Newsweek column from June 29, 1959. While some of the specific shenanigans of the Fed today are new, the goal of suppressing interest rates is not. Again, this column could have been written today with only a few modest alterations.

The President, the Secretary of the Treasury, and the chairman of the Federal Reserve Board are all urging Congress to remove the present legislative ceiling of 4¼ percent on the interest rate that the government can pay on its new issues of bonds with a maturity of five years or more. The reform is urgent. Some long-term Treasury bonds have already been selling in the open market at prices that yield about 4¼ percent. The government cannot sell new long-term bonds at yields below going market rates. The only effect of the present limit is to force the government to finance its needs through short-term borrowing. But this merely drives up rates (on which Congress has been wise enough not to put any ceiling) on short-term borrowing, and forces the government to keep coming back to an uncertain market every few months.

Some of the Democrats in Congress have been cool to the suggestion that the rate ceiling on long-term bonds be removed. They are full of counter-proposals, typical of which is that the Federal Reserve System support or buy in long-term government bonds at prices that would keep their yields 4¼ percent or below. Such proposals would not only destroy once more the hard-won independence of the Federal Reserve Board, but they would be violently inflationary.

How Inflation Comes

The Federal Reserve System was forced to peg the government bond market through the Second World War and until early in 1951. One result was a huge inflation. As Secretary Anderson explained the process anew in his recent testimony: The Reserve banks would buy Treasury securities, paying for them by creating deposits in the Treasury’s name. As the Treasury paid out this money to individuals, the Treasury checks would be deposited in individual banks, thus adding to those banks’ reserves because such checks are the equivalent of cash. This increase in the banks’ reserves would provide for a multiple addition to the banks’ lending and investing power. Direct sale of Treasury issues to the Federal Reserve, in short, would “provide the basis for a highly inflationary expansion of the money supply.”

The purchase of government securities by the Federal Reserve System is inflationary even when it buys short-term securities. But the situation would be much worse if it supported long-term securities also. Federal Reserve economists have pointed out that when the system buys, say, three-month bills, longer maturities are also affected in at least some degree by substitution or arbitrage transactions. In any case, increased bank reserves, which increase by a multiple factor the supply of funds available for loans and investments, are provided just as effectively by operations in bills as by operations in bonds. And there is a further consideration. The purchase of long-term bonds might have to be endless and astronomical to hold down the long-term interest rate. Such bond purchase, therefore, would ultimately be enormously more inflationary than bill purchase.

A False ‘Saving’

Some congressmen honestly think they are saving the taxpayers’ money by forbidding higher interest payments on government bonds. But the inflation they would force through Federal Reserve buying to keep down the bond yields means, as the President has put it, that “the additional cost to the government alone for increased prices of the goods and services it must buy might far exceed any interest saving.”

The irony is that the very congressmen who are now complaining about higher interest costs for the government are among those who have done most to bring them about. By insisting on artificially cheap money in the past, they increased the present extent of inflation. Part of the interest rate that the government must now pay for long-term borrowing is in effect an insurance premium that lenders are asking as a hedge against further depreciation of the dollar.

The chief contribution that Congress can now make is to balance the budget, remove fears of further inflation, stop agitating for cheap money, and let the Treasury meet whatever competitive rate is necessary to sell its bonds.
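The “multiple addition to the banks’ lending and investing power” that Hazlitt and Secretary Anderson describe is the textbook deposit-expansion multiplier. A minimal sketch, with purely illustrative figures:

```python
# A minimal sketch of the textbook deposit-expansion multiplier behind the
# "multiple addition to the banks' lending and investing power" above.
# The reserve injection and reserve ratio are illustrative assumptions.
def deposit_expansion(new_reserves: float, reserve_ratio: float, rounds: int = 100) -> float:
    """Total deposits created as each bank re-lends its excess reserves."""
    total, deposit = 0.0, new_reserves
    for _ in range(rounds):
        total += deposit
        deposit *= 1 - reserve_ratio  # the excess is lent out and redeposited
    return total

# $1,000 of new reserves at a 10% reserve ratio approaches the geometric-series
# limit new_reserves / reserve_ratio = $10,000 in total deposits.
print(round(deposit_expansion(1_000, 0.10)))  # ~10,000
```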


Spark Therapeutics Announces Updated Data from First Cohort in Hemophilia B Phase 1/2 Trial Demonstrating Consistent, Sustained Therapeutic Levels of Factor IX Activity

Hemophilia B Data Overview

SPK-9001, a novel bio-engineered adeno-associated virus (AAV) capsid expressing a codon-optimized, high-activity human factor IX variant, was developed using Spark’s proprietary technology platform for selecting, designing, manufacturing and formulating highly optimized gene therapies. SPK-9001 is being developed in collaboration with Pfizer Inc. (NYSE:PFE). Data from the Phase 1/2 clinical trial of SPK-9001 were presented on June 11 at the 21st Congress of the European Hematology Association (EHA) by Dr. Spencer Sullivan, an Assistant Professor of Pediatrics and Medicine at the University of Mississippi Medical Center and a trial investigator, and were an extension of the data released on May 19th in the initial EHA abstract.

Data presented today show that the low-dose cohort of four subjects enrolled in the study experienced consistent and sustained factor IX activity levels following a single administration of SPK-9001 at the initial dose level (5 x 10¹¹ vg/kg) studied in the trial. Factor IX activity in the first two subjects, who had no history of liver disease, rose consistently and has stabilized at 28% of normal through approximately twenty-six weeks post-administration in the first subject, and at 41% of normal at fifteen weeks post-administration in the second. The factor IX activity level in the third subject, who has a history of liver disease, also rose consistently and was at 26% of normal at approximately ten weeks post-administration. The fourth subject, also with a history of liver disease and not included in the previously disclosed data, saw a clinical response similar to that of the earlier subjects and was at 33% of normal through approximately seven weeks.

To date, over a combined 58 weeks of observation, none of the first four subjects has received regular infusions of factor IX concentrates to prevent bleeding events. One precautionary infusion took place due to a suspected ankle bleed in subject three, two days after administration. Based on their pre-enrollment histories, it is estimated that the four subjects followed to date would have received more than 100 infusions of recombinant factor IX over the period of the study to prevent or treat bleeds as part of their normal care.

Across the cohort, we saw no sustained elevation in liver enzyme levels and no drop in factor IX levels.  To date, SPK-9001 has been well-tolerated and no subjects have needed, or received, immunosuppression.

About Hemophilia A and Hemophilia B

Hemophilia is a rare genetic bleeding disorder that causes the blood to take a long time to clot as a result of a deficiency in one of several blood clotting factors, and occurs almost exclusively in males. People with hemophilia face specific risks as they are not able to form blood clots efficiently and are at risk for excessive and recurrent bleeding from modest injuries, which have the potential to be life-threatening. People with severe hemophilia often bleed spontaneously into their muscles or joints. Hemophilia A is more common than hemophilia B. The incidence of hemophilia A is one in 5,000 male births. People with hemophilia A have a deficiency in clotting factor VIII, a specific protein in the blood. Hemophilia A is also called congenital factor VIII deficiency or classic hemophilia. Current standard of care requires recurrent intravenous infusions of either plasma-derived or recombinant factor VIII to control and prevent bleeding episodes. The incidence of hemophilia B is one in 25,000 male births. People with hemophilia B have a deficiency in clotting factor IX, a specific protein in the blood. Hemophilia B is also called congenital factor IX deficiency or Christmas disease. Current standard of care requires recurrent intravenous infusions of either plasma-derived or recombinant factor IX to control and prevent bleeding episodes. There exists a significant need for novel therapeutics to treat people living with hemophilia.

The entire article can be read here.
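As a quick sanity check on the press release’s figures, the “combined 58 weeks of observation” matches the sum of the approximate per-subject follow-up times quoted above:

```python
# Sanity check: the "combined 58 weeks of observation" equals the sum of the
# approximate per-subject follow-up times reported in the press release.
weeks_post_administration = [26, 15, 10, 7]  # subjects 1 through 4
print(sum(weeks_post_administration))        # 58
```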
