Uncategorized

The impact of rates on banks

This post is being written from a mobile phone while I am getting ready to sleep, so it may not make much sense. With this warning, I shall dive right into my topic.

Indian banks tend to have their stock price go up when the central bank lowers rates. For the longest time, I had never questioned this ‘fact’ and never stopped to wonder why.

When I did, at first the answer seemed obvious. Of course Indian bank prices would go up. After all, the cost of funds is driven by the risk-free rate, and when the RBI drops it, the cost incurred by the banks comes down. What’s to question?

Then I thought about it some more. And this time, I put on my central bank hat. RBI (or any central bank) does not drop rates in order to help banks make more money. They do so in order to have more people take loans to grow the economy. The thing that banks should therefore do is to lower the rate at which they lend money and thereby keep their margins stable.

However, banks also have costs other than interest. They have to pay their CEOs and CFOs, their risk managers and the Vice Presidents responsible for sales. They need to pay for branch leases and ATMs. Operational costs that are not interest. So they should actually make less money when rates go down.

Luckily there exists another driver. When rates go down, more people start taking loans. Increased loan volumes make our CEOs and VPs work harder and therefore more productive. This leads to more profits and happy shareholders.
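If it helps to see the mechanics, here is a minimal sketch with every number invented purely for illustration: if the lending spread compresses a little when rates fall, a bank with fixed operating costs makes less money on a static loan book, and it is the growth in volumes that rescues the profit line.

```python
# Illustrative sketch (made-up numbers): a rate cut that compresses the spread hurts
# profit on a static loan book, but volume growth can more than make up for it.

def bank_profit(loan_book, lending_rate, cost_of_funds, operating_costs):
    """Profit = interest earned - interest paid - non-interest (operational) costs."""
    net_interest_income = loan_book * (lending_rate - cost_of_funds)
    return net_interest_income - operating_costs

before_cut   = bank_profit(loan_book=100, lending_rate=0.10,  cost_of_funds=0.06, operating_costs=2.5)
after_cut    = bank_profit(loan_book=100, lending_rate=0.075, cost_of_funds=0.04, operating_costs=2.5)
after_growth = bank_profit(loan_book=125, lending_rate=0.075, cost_of_funds=0.04, operating_costs=2.5)

print(before_cut, after_cut, after_growth)  # roughly 1.5, 1.0 and 1.9
```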

So the Indian example is now explained. What about places like the US or the Eurozone? There the cost of money is very close to zero, and any further lowering of interest rates may not be of much use in increasing lending.

In these cases, I suspect stock prices of these banks will go up when interest rates rise. We can wait and watch as the Fed is likely to raise rates over the next year and see if what I said turns out to be true.

Some bonus points for any readers who can identify the assumptions behind this post.

Economics

Derived Value — An alternative story of money

Introduction

I was advised by A, the software-coding capitalist socialist, to pen my last conversation with him into a blog post, failing which he promised to give me a kick in various sensitive parts of my rather large, economy-size posterior. This is, of course, a fate best avoided, so I will get cracking!

The post describes one of the ways I think about economic markets. It should not be taken too literally, and a careful and pedantic reader will find several holes in the story that I am going to tell, but this will serve as a very basic guide to the way our market based system has evolved.

The Beginning — The barter system

The barter system was the original way men could trade. One item of value was traded for another item (or service) of equivalent value.

However, it faced a whole bunch of problems, largely because barter depended on the coincidental needs of two counterparts who just happened to need what the other person had. Such coincidence may make for good stories, but it very rarely makes a good marketplace.

Thus was born money, which became a proxy for value.

The first derivative — money

There is the standard definition of money. If you want that, you can read an economics or commerce textbook. For this post though, let’s think of money as a proxy for value. Money’s own value lies only in the fact that other people will exchange it for items of value. So we can think of money as a first derivative of value.

Any derivative of a product imposes costs. So trading with money imposed costs like keeping track of money, worrying about money losing its value, people stealing your money, and a hundred other things. But the benefits of money far outweighed the small costs of adopting it, and thus was born the story of money.

For many thousands of years, that was enough. Money, in some form or another, was good enough to make marketplaces work, and kingdoms and empires run. Then came the spice rush!

The second derivative of value — the Joint stock company

When people wanted to sail thousands of miles in order to get spices from the East Indies (Sumatra, Java and the other islands of what is now Indonesia), they needed to get enough money to build fleets of ships, hire hundreds of sailors, buy thousands of beads to sell to the natives, and muskets and swords to take from the natives what they could not buy, and…well, you get the point. The people who wanted to buy spices needed a lot of money. It was so much money that even rich merchants did not have enough.

Thus was born the first stock issuance, where people raised money by selling shares in a shipping enterprise. One of the first of these entities was the Dutch East India Company.

The purchasers of stock in a company got the right to receive a portion of the money generated by the sale of spice. So the value that was generated was the spice, which in turn yielded a derivative of value, which was money, and the buyers and sellers of the stock of the Dutch East India Company could be said to be trading in the first derivative of money, or the second derivative of value.

Now this imposed more costs. Shares and stocks of a company needed to be tracked. Legal systems needed to be made to account for this new thing called a company. Books of accounts needed to be maintained. Armies of clerks were suddenly needed to write these accounts.

But the benefits of pooling capital were undeniable…and the second derivative of value was here to stay.

This was good enough for the capital market-place for the next 300 years or so. Then, sometime in the 1950s and ’60s, a bunch of people started wondering if the stock of companies being exchanged really correctly reflected the “value” of a company.

And they noticed something odd. The more the stock of a company changed hands every day, the better its “value” seemed to be. This seemed to be because a larger number of buyers and sellers somehow correlated with more information about a company feeding into the market place. The reasons are too complex for this post and will need a separate set of posts altogether, but suffice to say, greater liquidity seemed to encourage better pricing.

The Third derivative of value — Futures and Options

So someone came up with an idea. How about we create a product called a derivative, which will be separate from the stock of a company, but will be based on the “Value” of the stock of the company. A whole gaggle of products, futures, options, calls and puts, and Interest Rate Swaps suddenly came into existence, and the world of markets was changed yet again.

The futures and options market imposed still more costs on the now vast financial market-place. A whole new breed of speculators came into being. A brand new vocabulary needed to be invented. Whole hordes of lawyers were sacrificed in the drafting of the first derivative contracts, before an industry was formed that specialised in trading these new and exotic creatures. Terms like all-inclusive finance cost and cost of hedging came to bedevil the CFOs of otherwise normal companies.

In fact the derivatives markets in most stock markets now are 5-10 times larger than the underlying stock markets.

But even so, the benefits still seem to outweigh these costs. Stocks can be traded more quickly, prices are discovered faster, and the wheels of finance need to be greased generously with the lubricant of derivatives for them to function correctly.

The beginning of the next wave — Credit Risk

Now the tale starts getting a bit murkier. A mere dozen years after the first derivative contract was born, derivatives of derivatives started getting traded. However, they still tracked (in some form) the underlying value that companies generate, and were therefore still related to the core of the barter system, which was the exchange of value.

Now, some extremely intelligent people asked the next question…is there some way to get rid of the risk that is generated by the process of generating value?

Now, a step back to explain. Value is generated by people making goods and services that they hope other people will have a use for. But there is always some risk generated when people engage in this process. Maybe unseasonal rains will harm a wheat farmer’s harvest. Or a fire at a factory could ruin a company that produces widgets.

With so many people now trading so many products that only derived their value from the underlying company, people started to worry that perhaps the risks involved in running a company were not being appropriately priced. Thus was born the concept of the Credit Default Swap.

The wheels start coming off the wagon — the fourth derivative of value

The idea of the Credit Default Swap (CDS) is deceptively simple. If someone fails to make a payment on their debt, the swap acts as insurance for the holder of that debt. The buyer of a CDS buys insurance, and the seller of the CDS takes on the credit risk.

The idea is that the seller of the CDS takes the final step of evaluating the credit risk facing an entity, and by pricing the CDS appropriately, the cost of money (first derivative of value) for a company (second derivative of value) would drop, which in turn would make the product more fairly priced (generating more value).
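To make the pricing step concrete, here is a minimal sketch of the textbook “credit triangle” shorthand, which is a standard approximation rather than anything from this post: the fair annual premium is roughly the annual probability of default multiplied by the loss if default happens. Both inputs below are invented for illustration.

```python
# A minimal sketch of the textbook "credit triangle" approximation for a CDS premium:
# annual spread ~= annual default probability x (1 - recovery rate).
# Both inputs below are invented purely for illustration.

def cds_spread(default_prob_per_year: float, recovery_rate: float) -> float:
    """Approximate fair annual CDS premium, as a fraction of the notional insured."""
    return default_prob_per_year * (1.0 - recovery_rate)

spread = cds_spread(default_prob_per_year=0.02, recovery_rate=0.40)
print(f"{spread * 10_000:.0f} bps per year")  # ~120 bps of the notional, every year
```

The wobble described below comes from the first input: nobody actually knows the true probability of default, so the “fair” premium is only as good as that guess.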

Everyone wins! The CDS seller will never lose money, because the insurance is only ever sold after the credit risk has been clearly mapped out, and the company gets to lower its funding cost because of increased certainty. What could go wrong?

As it turns out…lots can go wrong. The additional complexity involved is now so high that it is extremely difficult for the system as a whole to generate more value than the added complexity costs. Sure, an individual CDS seller may be very good at pricing risk, and therefore could probably make money. But the components that make up credit risk are too many to know with certainty, and now the structure starts to wobble.

Collapse and the destruction of Value 

Any mistake in pricing risk will either increase funding costs of a company (thereby making its product unviable) or will hide the risks involved in the company (which will lead to a sudden catastrophic failure at some point in the future). Both of these things will tend to destroy an otherwise value generating entity.

More seriously, because of the large costs imposed across the entire value-creating process, the failure will tend to affect not only the producers and customers of the goods being produced, but also the investors, traders and buyers of every derivative product linked to the entity.

This is bad enough if the company is a large conglomerate like GE. But what if the entity being affected is one that actually creates money, i.e. a bank, which creates money by lending money and accepting deposits (another extremely long blog post is required to explain the mechanism)?

When banks get the price of their risk wrong, the chain of events that follows can turn into a reinforcing feedback loop, one that eventually causes the entire system of money the bank depends on to shut down (at least temporarily).

Conclusion

This view of the market-place is one that I find useful when trying to understand how individual items in financial markets move. By no means is this the only way to try to understand financial markets. In fact, it’s very likely this is not even the correct way. After all, a CDS being a fourth derivative of value sounds like quite a stretch!

But the underlying point is that each strand in the web of financial complexity adds costs while delivering some benefits. It is when we weave too many strands that the costs can bring down the economic system.

Does this mean that complex financial products should be prohibited? I honestly do not know. My own thoughts are that rule based systems tend to be too inflexible to work in a changing world. But I have been wrong before, and could be wrong again.

Uncategorized

The Internet

This is a short one. Over the last few days, I have started using my internet connection to do more than watch rhymes on YouTube.

I have been downloading several GB of games from Steam, and while doing so, I could not help but compare the world today with the world of 1999.

15 years ago, I was younger, and it took me 3 months to download a 75 MB game demo.

One reason was my dial-up connection, which gave me 33.6 kbps (that is kilobits per second) at best, and was usually about 16-17 kbps.

The second was the fact that we were billed for the internet both for the hours we put in, as well as for the phone call it took. And my dad would have several things to say if he saw a 70% rise in the phone bill!

Today, I downloaded a 10 GB game in about 45 minutes, while updating my drivers and browsing various news sites. That is quite a change from the world of 15 years ago.
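For anyone who enjoys the back-of-the-envelope version, here is a rough sketch of the arithmetic, assuming the sizes above are megabytes and gigabytes and that dial-up averaged about 17 kbps:

```python
# Rough arithmetic for the two downloads above (sizes assumed to be megabytes/gigabytes).

def hours_to_download(size_megabytes: float, speed_kilobits_per_sec: float) -> float:
    """Connect time, in hours, to pull a file of the given size at the given line speed."""
    bits = size_megabytes * 8 * 1024 * 1024            # megabytes -> bits
    return bits / (speed_kilobits_per_sec * 1000) / 3600

print(hours_to_download(75, 17))         # ~10 hours of actual connect time for the 75 MB demo
print(hours_to_download(10 * 1024, 17))  # ~1,400 hours if the 10 GB game were tried on dial-up

# Conversely, 10 GB in about 45 minutes implies a line speed of roughly 30 Mbps.
```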

It’s worth thinking about!

Economics, Freakonomics, Management

Economic Theories — The Tripod View of markets

Introduction

It has been a while since I last posted, and since then I have shifted cities, companies and the type of work that I do. And yes, I have married and become a father as well. Most of those things have ensured that writing posts has become fairly rare. This week, I have some time and wanted to restart with an idea that has been floating around in my head for some time — the relationship between customers, employees and shareholders, and how we can use it to analyse an industry.

The Components of an Economy

The standard economics books I have read say that a marketplace is made up of buyers and sellers, all seeking to maximise their overall value. But if we look at the world around us, the “market” does not seem to involve much negotiating of overall value. Sure, the famous “invisible hand” is supposed to be working to ensure this, but it seems fairly invisible to me!

Instead, I have been trying to see economic principles with respect to three components of the economy that I see in daily life whenever we buy (or sell) something.

First, the person who buys the goods, who is called the buyer, customer, client, sucker…depending on the industry you are in.

Second, the shareholder of the company, or the proprietor of the firm, who is selling you the stuff that you are buying.

Third, the employee who actually delivers the stuff that you buy. In some cases, like my local barbershop, the employee may be the proprietor. In other cases, like my bank, there may be many employees who deliver the product to me.

Most economic textbooks take the employee as one of the costs of production, and then say “Ta-da…standard buyer-seller model!”, and they are probably right. However, it may be interesting to take all three components and see what sort of dynamic evolves when you have a perfect market.

The Example — Telecom in India

The context and industry overview

One of the more interesting examples of this interaction was seen in the Telecom Industry between 2007 and 2012. For various reasons, which need their own blog post to discuss, there was a very large set of telecom service providers in India in this period. While 10-15 service providers do not a perfect market make, in the oligopoly that telecom usually is, such a crowd is not at all common. As a comparison, most developed markets have 2-3 major players in mobile telephony and maybe 1 or 2 minor competitors. So 15 service providers is quite a large number in the relatively low-margin telecom world in India.

As most economic theories would tell you, a larger number of sellers tends to drive down prices and make companies try to grab customers any way they can, and that is exactly what happened. Indian mobile phone tariffs plummeted, with already low call charges dropping to the lowest in the world.

Analysis — A free for all, and what ensues.

One of the things that I expected was that telecom service providers would start differentiating themselves in this crowded market with solutions tailored to individual market segments. This would mean that one of the service providers would capture the corporate data market, while another would target the low-margin but high-volume retail voice market. In other words, the world of telecom would start to resemble the world of soap sellers, with one group saying “20% moisturizing milk” and the other group saying “the soap of the stars”.

But that did not happen….at least from my perspective. Instead, tariff plans across the board seemed to converge in a pure price battle, as if all the businesses had no other alternative.

So you had this weird situation where there were 15 mobile service providers, but none of them were any good. Customer service was a joke, and any kind of customization that was requested was inevitably met with “Our policies/systems do not allow this.” My first instinct was to say that these companies are stupid. Why are they not investing in building a differentiated model? It would make so much sense!

But these companies are run by smart and knowledgeable people who know way more about the business than me. As an example, one of my classmates at my business school (who is definitely at least as smart as me, and more hardworking too!) works for one of those telecom companies. And at the top, I was sure that Sunil Mittal and Arun Sarin (at the time) were no mugs either. So why could they not figure it out?

So, I changed my assumptions. What if they had figured it out, but something was stopping them? It could not be management quality, so what could it be? The other two alternatives are people and capital. And now things became a lot more understandable!

Results — The unfortunate trio

First, some context. Any company needs to return cash to its investors (and certainly its lenders). There needs to be some amount of revenue that HAS to be set aside to ensure that capital providers are remunerated. And in telecom, the major expenses are spectrum fees (one time, usually financed with debt) and employee costs. I am deliberately excluding network costs and tower costs, which are not insignificant, but which muddy up the waters in this analysis.

So once you pay the debt holders (which should be non-negotiable), you have to distribute what remains between the shareholders and the employees. However, the hyper-competitive nature of the industry meant that nobody was making a lot of money. So companies could not pay their shareholders very much…and even though there was a lot of demand for telecom employees because of the number of companies, the sheer cash crunch at these companies meant that employees were not getting paid that much either! So employees at these companies were not happy at all. Wage hikes were below their expectations, and work hours seemed to stretch longer and longer!
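Here is a toy sketch of that squeeze, with numbers invented purely for illustration and network costs excluded, as above: debt service gets paid first, so a price war shrinks the single pool that shareholders and employees both draw from.

```python
# Toy illustration (invented numbers) of the three-way squeeze: debt service is fixed,
# so when a price war cuts revenue, shareholders and employees share whatever shrinks.

def residual_pool(revenue: float, debt_service: float) -> float:
    """What is left over to split between employee wage hikes and shareholder returns."""
    return revenue - debt_service

comfortable_market = residual_pool(revenue=100, debt_service=40)  # 60 left to split
price_war          = residual_pool(revenue=55,  debt_service=40)  # only 15 left to split

print(comfortable_market, price_war)
```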

“What about shareholders?”, you may ask. Well, shareholders were suffering too. From what I could see, Vodafone India was not making enough money to break even, so Vodafone Plc needed to keep investing in India. Bharti Airtel decided to diversify its holdings by moving outside India, which needed a lot of debt, which in turn has depressed its earnings per share. Reliance Communications tried playing the volume game, and for some time was the largest telecom company in India by subscribers (maybe it still is, but I don’t think so). But because its subscribers did not really make it much money, it also had to take on a huge pile of debt. Tata Docomo, which started the tariff wars in India with its famous 1p-per-minute plans, is yet to make a profit (I think).

Now customers should have had a great deal! But that does not seem to be true either. Sure, the number of subscribers shot up dramatically, but I have yet to meet a satisfied telecom subscriber! Service quality across the industry has suffered, and network congestion means that signal drops are (relatively) common across all service providers.

Conclusion

None of the three stakeholders in the industry are entirely satisfied with the status quo. As shareholders go, three of the smaller competitors seem to have disappeared (with some help from the courts). With the reduction in competition, we are now seeing a more traditional competitive landscape (though how long it lasts is a question mark). Companies are reducing debt and raising equity in an effort to ensure that risks in their capital structure are reduced.

At the same time, customer tariffs are beginning to rise, though tariffs in India are still among the lowest in the world. But as a result of the risks taken in the 2007-12 period, data tariffs remain too high for the average Indian, which ensures that data is yet to drive the profitability of these companies meaningfully. The road to providing differentiated services to customers is still not taken.

And employees? They are usually the last to benefit in any business cycle, and my own guess is that wage hikes and addition of hires would only begin after profitability of the companies improves significantly. 

Final Thoughts

Competition is supposed to be an unmitigated good. However, the telecom industry does not seem to bear this out, and globally we tend to see the industry captured by 2-3 players. There are multiple and complex reasons for this. The jargon terms here tend to be network effects, regulation, spectrum auction policies and the like. Whatever the reasons are, the impact of having multiple players is not an unconditional good.

In later posts, I shall try to use this three-party stakeholder system of economic analysis on other industries to see if any conclusions can be drawn. But this example is one that makes me wonder whether free markets are actually a stable equilibrium state, or whether they are like a tripod that can be tipped over by an imbalance from any side.

 

General stuff

blogging from cell phone

This is a test post from my phone. I’m pretty impressed with how it works. In fact, it is so good that I think it can be used for all my posts in the future.
Hopefully, this will increase the number of posts I will be able to put out. Considering the current number is zero, I guess anything would be better.

Speculation

Thinking and computing — How human beings remember, Part II

This is part II of the post that I began a couple of days ago. In the previous post, I covered how human beings remember, and also to a certain extent, how computers work.

So now, let’s cover some more complex memories. How do we remember things like, say, Newton’s First Law of Motion? Oh wait, I know this one. “Any body remains in a state of rest or in a state of uniform motion in a straight line unless acted upon by an external force.” It took me a couple of seconds to remember this, but I think it’s a pretty good definition.

The definition of Newton’s First Law of Motion is a pretty standardized thing. It involves words in a very specific sequence, and they make absolutely no sense if you do not put them in the right order. I can’t remember this if I put it in the splendid data warehouse of my previous post. If I try to remove any pieces of data in this definition in the interests of quicker recall, I will screw up the meaning itself, which just would not do. So how is it done?

Again, the mechanics of it are probably beyond my abilities. Something to do with REM sleep, for all I can tell. But conceptually, we can think of this procedural or sequential memory more in terms of the standardised data warehouses that exist today with computers. If a computer were asked to quote Newton’s First Law, I am certain it would do a far better job than your humble blog author here.

This is a part of information retrieval at which the computer is inherently better than human beings. We suck at definitions and tests. It took me almost 15 days of slogging 4 hours a day to memorize the key parts of my commerce textbook. And yet, the only part of it that I can remember 14 years later is “Money is a medium, measure, standard and a store.” That is pretty bad, considering I could still sit a 10th standard physics paper today and expect to get at least 75-80%.

So what makes commerce so hard to remember and physics so easy? Is it because I use physics every day, and commerce never? Well, that does not wash, because I work in a bank! The closest I get to physics is the advertisement for physics tuitions that I see while standing and waiting for my train to take me to work! In my case, I suspect it’s because I genuinely learnt the concepts behind each of those physics definitions I memorised. So, I understand that uniform motion in a straight line can be treated as similar to a state of rest in an ideal world, however odd that may seem at first glance.

In someone else’s case, this might be completely different. They might consider the 10 features of a joint stock company to be so logical that they cannot understand why I could not remember them. (My mom is one such example.) But once I had this insight, the rest of my theory became obvious. Even in cases where we memorize extremely long sequences of events, long-term recall is ensured by understanding the basis of such a sequence. It seems exceedingly unlikely that any person could memorize and retain memories in a loss-less manner unless they either understand the concept/context of the memory perfectly, or relive the memory so often that it is brought into the intrinsic data warehouse explained in my previous post.

One final example to belabor the point. If someone were to ask me what 12 times 12 is, I would be able to answer 144 without a second thought. But thinking about it, the process of arriving at 12 times 12 is quite laboured. If I tried doing it from first principles, it would take me at least 5 minutes to re-understand the concept, and then to apply it to this specific case of 12×12. This is the fundamental difference between the “intrinsic” memory of the brain, where we store our most automatic responses, and the “learned” memory of the brain, where we store corollaries and concepts that don’t need reusing all the time.
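In computing terms, the closest analogy I can think of is a lookup table backed by a slow recomputation. Here is a minimal sketch of that idea; the table contents and the repeated-addition routine are just stand-ins:

```python
# Sketch of "intrinsic" versus "learned" memory as a lookup table plus a slow fallback.

memorised_table = {(12, 12): 144, (7, 8): 56}   # the intrinsic memory: instant, automatic recall

def multiply_from_first_principles(a: int, b: int) -> int:
    """The slow, learned route: rebuild the answer from the underlying concept."""
    total = 0
    for _ in range(b):   # multiplication re-derived as repeated addition
        total += a
    return total

def recall(a: int, b: int) -> int:
    # Try the fast path first; only fall back to working it out when nothing is stored.
    cached = memorised_table.get((a, b))
    return cached if cached is not None else multiply_from_first_principles(a, b)

print(recall(12, 12))   # instant: straight from the table
print(recall(13, 17))   # slower: reconstructed from first principles
```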

Now, this does not really add anything. It’s just a neat (in my opinion) understanding of the way memory works. But the key is this: if someone were to figure out a way to segregate these memories, build short-term data warehouses that computers could access instantaneously to “guess” answers from, while they search their complete data banks to get the “right” answer, we would have a pretty neat form of AI. Now, I have not seen Watson, which is supposed to be the computer that became Jeopardy champion, but maybe Watson’s AI has this sort of guess algorithm, where if Watson has an answer that is significant to 2 sigma, it presses the buzzer.
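I have no idea whether Watson actually works this way, so treat the following purely as a sketch of the “buzz only when the guess clears a confidence bar” idea, with made-up scores:

```python
# Purely hypothetical sketch (not Watson's real algorithm): answer from the quick guess
# only if it stands clear of the other candidates by a chosen number of sigmas.

import statistics

def should_buzz(candidate_score, other_scores, sigmas=2.0):
    """Buzz if the best candidate is 'sigmas' standard deviations above the rest."""
    mean = statistics.mean(other_scores)
    spread = statistics.stdev(other_scores)
    return candidate_score > mean + sigmas * spread

print(should_buzz(0.92, [0.20, 0.35, 0.25, 0.40]))  # True: confident enough to answer now
print(should_buzz(0.45, [0.20, 0.35, 0.25, 0.40]))  # False: keep searching the full data banks
```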

I have no clue as to how AI algorithms and machine learning theories work. But if any of them use the thought process that I have set out, it would be intriguing to see where it would finally end up.

P.S.: If you, oh reader, do know AI programming and think my views are hokum (or otherwise), I would love to be told so. I would love comments on this topic, especially as my own knowledge of this field is so inadequate.

Uncategorized

Thinking and computing — How do human beings remember?

Yesterday, I spent some time with a couple of friends trying to find an answer to one of man’s oldest questions. “How do we think and remember?”

First off, let me say this. If you are looking for mysticism and godly type stuff, this is not the page to visit. Second, these thoughts are my speculations. I have no background in this stuff so if you, oh fine reader are an expert and know more than me, please do enlighten me. I cannot pretend an expertise I don’t have.

If any of you have queried computer databases, you have probably sat twiddling your thumbs while the database looks through every row and column of the entire data set trying to find the relationship that you are looking for.

But if you ask a person, “do you know that guy?”, the other guy would probably be really quick at figuring out if “that guy” was really known to him or not.

If you think about this, it’s amazing. Modern hard discs can stream a few hundred megabytes of data in a couple of seconds, and processors can do billions of operations every second. And yet a humble human being can parse through all the data points that make up his or her memory and recall a face or an incident almost instantly.

I am not going to try to figure out the mechanics of how the brain parses through its data banks. I am sure it has to do with synapses and neurons and proteins and stuff. But I am going to try to look at it as a data manipulation problem. Assuming the brain stores 100 billion bits of information, and that it takes a finite amount of time to go through those 100 billion bits, you can sort of work out the time taken to recall a particular set of information.
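As a toy version of that calculation, with a completely made-up scan rate:

```python
# Toy arithmetic for the thought experiment: how long a purely linear scan of the
# brain's assumed data bank would take. Both numbers are assumptions, not measurements.

BITS_STORED = 100e9             # the post's assumed 100 billion bits of information
BITS_CHECKED_PER_SECOND = 1e9   # an arbitrary scan rate chosen purely for illustration

print(BITS_STORED / BITS_CHECKED_PER_SECOND, "seconds for a full linear scan")  # 100.0 seconds
```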

The problem is that for the brain, it’s not exactly a linear search process. For example, we can remember the sequence of numbers from 1 to 10, but at the same time have difficulty remembering certain words…(it’s at the tip of my tongue!). Essentially, the number of bits of information in a word and in a sequence of numbers may be the same, but the way the human brain retrieves this information is different.

Of course, I am not the first to come up with this funda. People before me have come up with “short-term memory” and “long-term memory”. They even make funky analogies to RAM and hard disk drives to explain it.

But I think I have stumbled upon an idea. It has always puzzled me how human beings are so awesome at retrieving information compared to computers (which ought to be even more awesome, considering their perfect recall and blinding speed at manipulating data). Then I actually sat and thought about it. Does having more data and perfect recall actually make you better at retrieving information…or worse?

Let me illustrate this with a practical situation. A computer records a face perfectly. It stores Ms. N’s face as a 1920×1080 pixel image, with each of those pixels having several properties (colour, tone, brightness, etc.). This essentially translates into a huge pile of data which can be up to 8 MB in size. In my head, however, N’s face is not stored at all (she would be the first to agree). Instead, the idea of N is stored. It is a bit hard to explain, because everybody stores personal details differently in the privacy of their own mind. But the best I can come up with is that N’s face is buried inside a really complex data warehouse, where the number of bytes that actually represents her face is a lot less than 8 MB…it might only be a few bytes of memory.

But the advantage it gives me is game-changing. Firstly, as the data that is N’s face is actually stored as part of a data query, and is a lot smaller in size, my comparatively slow speed of access is offset by a huge advantage in my software. My mental database is exceedingly efficient at parsing through massive data sets precisely because I don’t actually store all the data that a computer stores.
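A crude way to sketch this trade-off in code, assuming a face can be boiled down to a handful of coarse traits (the traits and people below are invented):

```python
# Sketch of storing the "idea of N" instead of her pixels: tiny descriptions are cheap
# to search, at the price of throwing away nearly all of the detail.

# A raw image: 1920 x 1080 pixels x 3 colour channels ~ 6 million stored values per face.
RAW_VALUES_PER_FACE = 1920 * 1080 * 3

# The compact "idea" of each person: a handful of remembered traits, not every pixel.
remembered_people = {
    "N": {"hair": "dark", "glasses": True,  "build": "slight"},
    "A": {"hair": "grey", "glasses": False, "build": "heavy"},
}

def recognise(observed):
    """Match an observation against the tiny stored descriptions, not full images."""
    for name, traits in remembered_people.items():
        if all(observed.get(key) == value for key, value in traits.items()):
            return name
    return None

print(recognise({"hair": "dark", "glasses": True, "build": "slight"}))  # 'N'
print(RAW_VALUES_PER_FACE, "values per face if every pixel were stored instead")
```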

This of course leads me to a weakness. I can’t exactly tell you how every pixel of N’s face looks. At best, I can recall maybe 10% (probably closer to 1%) of N’s facial features. My computer is probably way better than me at remembering those details. So if I want to figure out what N looks like, I don’t try to bring a picture of N into my mind’s eye. I just do not have the capability to do that good a job of it. Instead, I take a look at her photograph, which is way more detailed than I could ever be.

In my next post, I shall try to cover the other part of memory…which is rote memory. It’s how I passed my commerce examination, after all! Finally, I hope to draw an analogy between these types of memory and data manipulation by computers.

So stay tuned. Till then ta ta!