
Rowady’s Thoughts on Volatility, Data and Crowd


If you want to value the derivatives in your portfolio accurately, then you’d love to be able to predict future volatility. Right? Since you can’t, you make do with either implied or historical volatility. Still, what you’d really like, the “holy grail” of the knights of valuation, is to know what volatility will actually be over the life of the instrument.
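For concreteness, historical volatility is the backward-looking stand-in. The sketch below is only a minimal illustration, assuming daily closing prices and the usual 252-trading-day annualization convention (neither of which comes from the article itself):

    import math

    def historical_volatility(closes, trading_days=252):
        """Annualized historical volatility from daily closing prices.

        Log returns, sample standard deviation, annualized by the square
        root of the assumed number of trading days per year.
        """
        returns = [math.log(b / a) for a, b in zip(closes, closes[1:])]
        mean = sum(returns) / len(returns)
        variance = sum((r - mean) ** 2 for r in returns) / (len(returns) - 1)
        return math.sqrt(variance * trading_days)

    # A made-up six-day price series, for illustration only
    print(historical_volatility([100.0, 101.2, 100.5, 102.3, 101.9, 103.0]))

Implied volatility, by contrast, is backed out of observed option prices; neither tells you what volatility will actually turn out to be.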


As Paul Rowady of TABB Group has put it, one reasonable inference from this equally reasonable premise is that innovators should have been putting a lot of emphasis on the integration of catalytic data – which is just what it sounds like: data on the sort of events that catalyze big price moves, such as upcoming earnings announcements or strategically important shareholders’ meetings.


Limits to Integration


Rowady says that as of the early 1990s there were at least three important limits to this sort of catalytic-data integration: metadata tagging hadn’t yet become readily available; no appropriate historical archive existed (for the back-testing of estimates, etc.); and there was no “high-performance infrastructure for dissemination and digestion of largely unstructured data...”


At that same time, though, specifically in 1993, the Journal of Derivatives published an article by Galen Burghardt and Gerald Hanweck, “Calendar-Adjusted Volatilities,” addressing precisely this point. They offered an analysis of the relationship between specific events and their incremental impact on volatility, with an eye to options pricing.
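The reasoning behind that idea can be sketched in a few lines. The function below is not Burghardt and Hanweck’s method, only an illustration of the additive-variance logic behind calendar adjustment: if each remaining day in an option’s life carries its own volatility forecast, days with known catalysts can simply be assigned a larger one.

    import math

    def calendar_adjusted_vol(daily_vols, trading_days=252):
        """Aggregate per-day volatility forecasts over an option's remaining life.

        Variances add across days, so days carrying known catalysts (earnings,
        shareholder meetings) can be given a higher daily volatility than
        ordinary days; the total is then re-expressed as an annualized figure.
        """
        total_variance = sum(v ** 2 for v in daily_vols)
        avg_daily_vol = math.sqrt(total_variance / len(daily_vols))
        return avg_daily_vol * math.sqrt(trading_days)

    # 20 ordinary days at 1% daily vol plus one earnings day at 3%
    print(calendar_adjusted_vol([0.01] * 20 + [0.03]))

In this toy example the single 3% earnings day lifts the annualized figure to roughly 0.19, against roughly 0.16 for the ordinary days alone.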


There have been many pertinent developments since. Rowady’s recent article for the website of the Global Association of Risk Professionals focuses on the lifting of the three constraints mentioned above. First, as to metadata tagging: XML 1.0 was defined in 1998. While XML was still in development, JPMorgan (not yet JPMorganChase) teamed up with PricewaterhouseCoopers to develop XML for use with financial products, and the first result, a draft standard for interest rate swaps, was announced in 1999. From such beginnings came FpML and RIXML, both still very much works in progress.
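To make the metadata-tagging point concrete: the value of a standard like FpML or RIXML is that an event can be described in machine-readable fields rather than in free text. The fragment below uses invented tag names purely for illustration; it is not drawn from either standard.

    import xml.etree.ElementTree as ET

    # Invented, illustrative tag names -- not actual FpML or RIXML elements.
    event = ET.Element("catalyticEvent", attrib={"type": "earningsAnnouncement"})
    ET.SubElement(event, "issuer").text = "EXAMPLECO"
    ET.SubElement(event, "scheduledDate").text = "2013-07-25"
    ET.SubElement(event, "expectedImpact").text = "high"

    print(ET.tostring(event, encoding="unicode"))
    # <catalyticEvent type="earningsAnnouncement"><issuer>EXAMPLECO</issuer>...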


Second, there is the matter of archives and back-testing. For these purposes, it is important that the Extensible Business Reporting Language (XBRL) has for several years now been the standard language for IFRS and US GAAP reporting, and for filings with the Securities and Exchange Commission.

A Groundbreaking Uptick Approaches


Finally, there is the development of the high-performance infrastructure. There has been a lot of this. Rowady writes with some enthusiasm of the social media phenomenon as allowing institutions to crowdsource high-value data on catalytic events “from just about anyone, anywhere, and at any time,” though he acknowledges the need to filter out noise in the process.
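As a toy illustration of the noise-filtering problem Rowady flags (and not anything he or TABB proposes), one naive approach is to require corroboration from several independent sources before a crowdsourced event report is treated as usable:

    from collections import Counter

    def corroborated_events(reports, min_sources=3):
        """Keep only crowdsourced event reports confirmed by several distinct sources.

        Each report is a (source_id, event_key) pair; an event survives only if
        at least `min_sources` different sources mention it. A deliberately
        naive noise filter, for illustration only.
        """
        seen = set()
        sources_per_event = Counter()
        for source_id, event_key in reports:
            if (source_id, event_key) not in seen:   # ignore repeat reports from one source
                seen.add((source_id, event_key))
                sources_per_event[event_key] += 1
        return {event for event, n in sources_per_event.items() if n >= min_sources}

    reports = [("u1", "XYZ earnings moved up"), ("u2", "XYZ earnings moved up"),
               ("u3", "XYZ earnings moved up"), ("u4", "ABC CEO resigns")]
    print(corroborated_events(reports))   # {'XYZ earnings moved up'}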


There has also been a lot of ingenuity devoted to the enhancement of reaction speeds through ultra-low-latency, machine-readable data products. These have brought reaction times down to “millisecond and microsecond distribution speeds for the benefit of high-turnover trading strategies”.


With all of this, Rowady says, “solution providers” are at present “on the verge of having most, if not all, of the raw material they need to achieve a groundbreaking uptick in clarity about the event horizon for purposes like volatility estimation and risk analytics.”



When Rowady’s column was reprinted on the TABB Forum website, the above-mentioned Gerald Hanweck added a comment.

A Warning


He wanted his own work with Burghardt to receive proper recognition, and he wanted to observe that the options markets (a form of crowdsourcing themselves) have been aggregating vol-defining information roughly since the time of that publication.


Beyond either of those points, Hanweck wanted to raise a concern about the sort of crowdsourcing Rowady has in mind. It may end up, he warned, looking a lot like a conduit for material, non-public information, and thus may raise “the wrath of the SEC.”


That is a valid concern. The idea of crowdsourcing shades off into that of “expert networks,” and recent history illustrates pretty dramatically that the SEC takes a jaundiced view of those.


Still, it should be possible for financial engineers and compliance officers to work hand in hand to get the world of derivatives markets past this particular event horizon, in a way that won’t have the SEC standing heroically on the railroad track shouting “stop.”


This article was written by Christopher Faille and originally published on AllAboutAlpha under the title “Rowady’s Thoughts on Volatility, Data and Crowds.”



Tags: crowd, data, derivatives, volatility