Tech. Decoded.
Emerging technologies and the future of finance.
Welcome to Tech. Decoded.
Read on for an introduction from Bloomberg's CTO, Shawn Edwards,
and explore the report contents and topics.
Note from Bloomberg's CTO, Shawn Edwards
Automation, artificial intelligence and machine learning are poised to transform the financial services industry. These technologies are growing in sophistication and driving investment, influencing decision-making in areas such as data analysis, investment strategies and workflow automation. Featuring articles and insights from across Bloomberg, this report explores developments related to these technologies and their impact on the future of the industry.
As AI and other advanced technologies permeate the workplace, skills like critical thinking, creativity and problem solving have become increasingly important. Leading companies are recognizing that these technologies are most effective when they complement humans instead of replacing them.
While education and retraining are critical to adapting to new technologies, there is no doubt of the value to be gained: AI and people are smarter together. We will look at the ways in which this combination of automation and critical thought is being used to rethink the value extracted from data.
For example, sentiment data is a major source of untapped value that can complement and augment traditional analysis of fundamental or technical data. Similarly, automated trading and trader augmentation tools can streamline time-consuming compliance work, allowing for greater productivity and an increased focus on client services.
Financial services firms are also improving their compliance and risk management processes – with many putting AI to work to augment current processes and operate more effectively within existing systems.
And firms aren’t the only ones eyeing AI and machine learning: regulators are as well. Increased regulatory reach and complexity are motivating companies to anticipate potential roadblocks to using new technologies, and to adjust to avoid them.
Technology trends like these have the potential to disrupt and radically transform all industries. From investments in cloud computing to advances in machine learning, there are opportunities and lessons to be learned and applied across industries. We'll take a close look at the digitalization of the energy sector and growth of investment in sustainable technologies.
Despite efforts around diversity, women remain underrepresented in the technology sector. Bloomberg, in its role as a tech innovator, needs to tap into all potential talent and has seen that teams with members of diverse backgrounds perform better and produce more creative ideas. We see the implicit value in fostering a vocal and growing “Women in FinTech” community through events, networking and other initiatives. By encouraging collaboration and professional development, such groups inspire future leaders and further essential conversations about diversity and inclusion and offer ideas on ways to overcome outdated cultural norms.
For those looking to digitally disrupt and evolve their business, these innovations present a wealth of opportunity. I invite you to contact me regarding any of the topics raised, or to discuss how Bloomberg can further help you and your team.
Shawn Edwards,
CTO, Bloomberg L.P.
Topic tags as a key tool for optimizing
sentiment analysis.
The mining of data sources is an essential component of modern trading strategies —
and topic tags are key, helping machines mine unique datasets for market-moving signals.
The rise of algorithms is giving Wall Street a makeover as traders and investors look to high-powered machines and unique datasets for market-moving signals.
The mining of alternative data sources has become an essential component of modern trading strategies that firms cannot afford to ignore.
All this makes the systematic analysis of news stories an appealing idea. Businesses were quick to embrace news and social media analytics as a data-driven tool for brand management and targeted advertising.
Today, that same content is processed by quantitative hedge funds with greater precision and speed to uncover predictive signals that can inform better trading decisions.
Textual news is usually processed with natural language processing (NLP), a computer science field that has been around for decades.
One example of an NLP task is sentiment analysis, where each news story can be classified by its underlying tone to decipher potential impact on a stock's price.
For example, a news article about better-than-expected quarterly earnings might be scored as having positive sentiment and lead to a pop in the stock price, whereas a news article about an analyst downgrade could be scored as negative and result in a correction.
With the greater availability of open-source NLP toolkits and services, it can be tempting to build such a system by hooking up an off-the-shelf algorithm to an aggregated live newsfeed.
Yet problems arise when machines attempt to interpret specialized language. A word like “magnificent” would normally be classified as positive by most algorithms, even when it appears in the context of Magnificent Hotel Investments, according to Ivailo Dimov, Quantitative Researcher and Data Scientist at Bloomberg.
Without specialized training in finance-oriented domain knowledge, most general-purpose NLP algorithms fail to note the subtleties, which can skew sentiment scores and prove fatal to trading performance. “If you can’t discern whether the text applies to a company or business situation, it can result in noisy and erroneous data,” says Dimov.
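As a rough illustration of this pitfall (using NLTK's general-purpose VADER analyzer as a stand-in for an off-the-shelf tool, not any algorithm Dimov describes), a name like Magnificent Hotel Investments can drag an otherwise negative headline toward a positive score:

    # Minimal sketch, assuming NLTK is installed; VADER is a general-purpose
    # lexicon, so "magnificent" is likely scored positive regardless of context.
    import nltk
    nltk.download("vader_lexicon", quiet=True)
    from nltk.sentiment.vader import SentimentIntensityAnalyzer

    sia = SentimentIntensityAnalyzer()
    print(sia.polarity_scores("Magnificent Hotel Investments misses earnings estimates"))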
In response to these challenges, Bloomberg’s machine-readable news and analytics feed uses proprietary NLP models trained on a large volume of carefully annotated historical financial news. Sentiment scores are generated for the individual stocks mentioned in each article.
Stories are also tagged with a rich set of topic tags to further categorize content characteristics and themes, such as technology <TEC>, analyst changes <ANACHANGE>, or downgrades <ANACUT>. “With topic tags we can gather more relevant information about sentiment than the raw text itself,” says Dimov.
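A hedged sketch of how such tags might be consumed downstream (the record format below is illustrative, not Bloomberg's actual feed schema; only the tag mnemonics come from the article):

    # Hypothetical story record with topic tags and a sentiment score.
    story = {
        "headline": "Analyst cuts rating on XYZ Corp",
        "tags": {"ANACHANGE", "ANACUT"},
        "sentiment": -0.6,
    }
    WATCHED = {"ANACUT", "TEC"}  # downgrades and technology stories

    # Route the story to a strategy only if it carries a watched tag.
    if story["tags"] & WATCHED:
        print(story["headline"], story["sentiment"])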
Improving code grouping through component analysis
Because Bloomberg collects and internalizes data from a wealth of sources, it has over time developed a robust solution for generating topic tags with greater accuracy.
In most cases, a given news story carries more tags than strictly necessary, in an attempt to capture all relevant information while avoiding potential errors. Meanwhile, the full topic taxonomy contains tens of thousands of unique tags with a heavily skewed, long-tail distribution.
This presents nontrivial challenges when one tries to utilize topic tags to further enhance sentiment-driven strategies. Proper dimension reduction is needed to associate tags of similar meanings so that they can be treated holistically as a group.
However, traditional techniques such as Latent Semantic Analysis base the analysis solely on term co-occurrences, which turns out to be very noisy given the high-dimensional, sparse distribution. As a result, LSA tends to group topic tags together even when they exhibit no clear logical relationship.
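For intuition, here is a toy version of that approach, reducing a story-by-tag co-occurrence matrix with truncated SVD (scikit-learn's standard LSA recipe); the matrix is invented for illustration:

    import numpy as np
    from sklearn.decomposition import TruncatedSVD

    # Toy co-occurrence matrix: rows = stories, columns = topic tags.
    X = np.array([[1, 1, 0, 0],
                  [1, 0, 1, 0],
                  [0, 0, 1, 1],
                  [0, 1, 0, 1]])
    lsa = TruncatedSVD(n_components=2).fit(X)
    # Tag loadings on each latent component; with tens of thousands of sparse
    # tags, components like these can lump together tags that merely co-occur.
    print(lsa.components_)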
What is your opinion on the market impact of news sentiment?
- I think it is the most important factor — stock price is driven primarily by crowd sentiment instead of company fundamentals.
- I think news sentiment is one of the many factors that drives stock price.
- I think it is more useful in specific periods of time, such as post-earnings, rate decisions, etc. Other times it is just noise.
- I think it is basically noise.
Ivailo and his colleague Daniel Lam, Senior Quantitative Researcher at Bloomberg, together developed a novel mathematical approach called π-component analysis to better understand and group the codes in a maximally cost-effective, parsimonious way.
When combined with sentiment analysis, groups of topic tags identified by π-component analysis systematically show stronger impact of sentiment on the prices of certain stocks – evidence that structured news sources play a valuable role in the search for alpha.
Machine learning plays a critical role
in improving data quality.
As the world of data grows at an almost incomprehensible rate, companies are sitting
on enormous data reserves that, as yet, remain untapped.
The world of data is growing at an almost incomprehensible rate: the size of the digital universe will double every two years at least.
As a result of data proliferation, many companies are sitting on enormous untapped data reserves, but they are often scattered and in incompatible data formats.
Companies with a data capitalization strategy are investing to ensure they can extract as much value from their data as possible. A key component of any sound data strategy is a robust data quality process.
It is tempting for companies to consider short-term solutions and manual processes for data scrubbing, but for any long-term and repeatable data-related strategy, an algorithmic approach is appropriate.
Both a challenge and an opportunity
For financial services firms in particular, big data presents both a challenge and an opportunity. Currently, businesses have more data at their fingertips than ever before, but understanding and effectively using this data can still be difficult.
According to Matthew Rawlings, Head of Data License at Bloomberg, problems arise from the fact that “it takes a lot of manual effort to clean and run that data and add some business intelligence on top of it.”
Many firms have faced a time lag in making data-driven decisions — by the time the data is located, tidied, sorted and applied, it is virtually out of date and no longer relevant. Firms can run into significant issues — both regulatory and business-related — if their data quality is not up to scratch.
Indeed, in a pre-conference survey of delegates heading to the 2017 North American Financial Information Summit, just over half (51%) cited data quality as their biggest immediate hurdle.
A year-long process — in a day
Perhaps due to some of these drivers, a growing number of early adopters are turning to machine learning, a process that utilizes sophisticated artificial intelligence to effect something of a technological revolution in the data-quality world. AI’s capabilities are at the tipping point of exponential adoption and impact.
"AI is important because it compresses the process. You take what was a year-long process and the machine can get it done in potentially a day, so you can test hypotheses and act on them much more quickly."
Matthew Rawlings, Head of Data License, Bloomberg
To illustrate this, imagine a large bank that regularly deals with NatWest (National Westminster Bank). Across different business units, databases and spreadsheets, there can be many variations on the same client name — perhaps simply appearing as County NatWest, Nat West or National Westminster and so on. Reconciling all of these entries would take significant manual work.
But a computer program can theoretically scan and process data from across the bank and deliver all of the matches in a matter of hours. “Suddenly the bank can see instantly, at a corporate level, its entire exposure to NatWest,” explains Rawlings. “This enables faster, better decision-making,” he added.
This process, known as name-identity recognition, is just one of the areas where machine learning is capable of making a radical difference. And the process improves over time.
In the NatWest example, the original scan may flag, say, 10% or 15% false-positive matches on its first attempt. Through continuous feedback, it is then capable of learning from the false positives and applying the adjusted rules to the next set of data. This constant evolution is what makes machine-learning technology so effective at scrubbing and verifying data at speeds previously thought impossible.
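A minimal sketch of the matching step, using only Python's standard library; a production entity-resolution system would be far more sophisticated, and the threshold here is an assumption to be tuned from the false-positive feedback described above:

    from difflib import SequenceMatcher

    CANONICAL = "National Westminster Bank"
    records = ["County NatWest", "Nat West", "National Westminster", "Banco Santander"]

    def similarity(a: str, b: str) -> float:
        # Ratio of matching characters between the two names, ignoring case.
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    # Flag likely matches; 0.5 is an arbitrary starting threshold.
    matches = [r for r in records if similarity(r, CANONICAL) > 0.5]
    print(matches)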
Ensuring data quality with machine learning
Using technology of this kind can ensure data quality across the enterprise. During a webinar, John Randles, Chief Executive Officer at Bloomberg PolarLake, recalled the story of a large global asset manager.
“We discovered millions of mismatches between the metadata which was describing that data and the actual source data itself and over a 15-month period eliminated these issues,” crunching the number of problems with the dataset down from the millions into the thousands.
Using the right technology can provide a firm with one of its core needs — data in context. Context is the most important aspect of getting staff to appreciate data quality, according to Sanjay Saxena, Head of Enterprise Data Governance at Northern Trust Corporation. “When you can explain it in terms of their day-to-day job, you see the light bulb come on,” he said during the webinar.
Data management best practices have been significantly improved by a combination of the plummeting cost of computer processing power, increasing availability of data, and the democratization of open-source machine-learning tools, which allow any company to become AI-enabled.
The new data science methods and best practices allow for the distillation of billions of data cells and rows into meaningful insights. Data quality will continue to be a differentiator for any institution’s data insights.
Ultimately, humans cannot scale at the rate needed to interpret data in zettabytes, which is why a foundation of machine learning is so important.
Extracting value from social and
news data.
Social sentiment is increasingly seen as a major source of untapped alpha —
with conversations on Twitter providing potential insights into future moves.
Now, it’s time to get quantitative.
The use of machine learning and AI techniques has opened new avenues for quantitative fund managers to derive value from traditional and nontraditional data sources everywhere in the world.
Investors are starting to obtain powerful insights from conversations and interactions taking place in the news and on Twitter.
Investors are discovering that sentiment data is a major source of untapped alpha — unlike the traditional financial data or strategies previously available.
This is creating a massive opportunity, as everyone is looking for an edge in getting relevant market-moving information ahead of other market participants; however, to capture this opportunity, investors must embrace some quantitative practices.
Identifying a reliable signal isn’t as simple as reading the news or following the right people on Twitter. It requires human intuition to underpin the strategy, infrastructure to handle large volumes of data and machine learning to model that data.
Cleaning and handling social data
Recent breakthroughs in technology have made transforming the many varieties of data more effective. One method used is to employ natural language processing to extract and tag relevant information buried within troves of unstructured text.
“This involves defining words in the correct context,” says Arun Verma, Senior Quantitative Researcher and Head of Quant Research Solutions at Bloomberg. “Cook and Apple on their own could refer to a recipe, but together in a string of text, it likely involves the company Apple (AAPL).”
One method is “named entity disambiguation,” which determines which items in a Twitter stream or news article link to a company name. It’s a necessary step in processing text for analysis.
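As a rough stand-in for that step, a pretrained named-entity recognizer such as spaCy's can pick out organization names from raw text (assuming the en_core_web_sm model is installed); linking each mention to the right security is the harder disambiguation problem:

    import spacy

    nlp = spacy.load("en_core_web_sm")  # small pretrained English pipeline
    doc = nlp("Tim Cook said Apple will report quarterly earnings next week.")
    for ent in doc.ents:
        # A typical run tags "Tim Cook" as PERSON and "Apple" as ORG.
        print(ent.text, ent.label_)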
Before fitting a model, though, a human being must label stories in the training set — the portion of labeled data used to teach a model. An algorithm studies the different relationships in this sample data before a fully trained classifier tests the remaining observations. Using high-quality labeled training data improves the chances that the model will find a pattern that repeats itself.
To label the text accurately, a training set is curated by human experts who assign a sentiment score to each story in the set from the perspective of a long-term investor in the company. They focus solely on the text rather than the outcome, so scores don’t reflect any subsequent price movement. Once models are in development, further testing can be used to check the accuracy of the manual classification.
Finding a signal in the noise
From here, the labeled data can be fed into a machine-learning model such as a Support Vector Machine (SVM) that determines whether the story belongs to a specific class.
An SVM training algorithm separates text into two classes based on its features. When dealing with sentiment, where stories are scored positive, negative or neutral, a more nuanced approach is required. Verma observes that Bloomberg applies multiple support vector machines and pairwise classification to convert a multi-class problem like sentiment into a series of two-class problems.
Each SVM operates in a high-dimensional space and follows the bag-of-words framework — a catalog of words related to finance and investing. That way the training algorithm can discover an optimal separator between each pair of classes:
- Positive-neutral
- Positive-negative
- Negative-neutral
“The results of the three binary classifiers are fed into a new machine-learning model like K-Nearest Neighbors (KNN) to classify stories without a clear sentiment into one of the three classes,” says Verma. KNN analyzes and
categorizes stories in real time based on cases from training data found in the neighborhood of the target story.
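A minimal sketch of that two-stage design in scikit-learn; the stories, labels and hyperparameters below are illustrative assumptions, not Bloomberg's models:

    from itertools import combinations

    import numpy as np
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.svm import LinearSVC

    stories = ["earnings beat expectations", "analyst cuts rating",
               "company holds annual meeting", "profit warning issued",
               "record revenue reported", "CEO to speak at conference"]
    labels = np.array([1, -1, 0, -1, 1, 0])  # 1 positive, 0 neutral, -1 negative

    X = CountVectorizer().fit_transform(stories)  # bag-of-words features

    # One binary SVM per class pair: positive-neutral, positive-negative,
    # negative-neutral.
    svms = [LinearSVC().fit(X[np.isin(labels, pair)], labels[np.isin(labels, pair)])
            for pair in combinations([1, 0, -1], 2)]

    # The three binary decision scores become features for a KNN meta-classifier
    # that assigns ambiguous stories to one of the three classes.
    meta = np.column_stack([svm.decision_function(X) for svm in svms])
    knn = KNeighborsClassifier(n_neighbors=3).fit(meta, labels)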
To check if the machine-learning model is performing well, the next step would be to construct a confusion matrix that maps predicted classifications against actual classes.
The correct predictions fall on the diagonal, while misclassifications sit on the off-diagonal entries. It not only validates or discredits the algorithms, but also the human experts who labeled the initial data — providing a starting point for making improvements.
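In scikit-learn terms, that check is a single call; the class ordering below is an illustrative choice:

    from sklearn.metrics import confusion_matrix

    # Rows are actual classes, columns are predicted classes;
    # correct predictions land on the diagonal.
    y_true = [1, -1, 0, -1, 1, 0]
    y_pred = [1, -1, 0, 1, 1, -1]
    print(confusion_matrix(y_true, y_pred, labels=[1, 0, -1]))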
Of course, fixing every problem can lead to overfitting. The classic way to handle errors without overfitting is to divide the data set into three sets: training, validation and test.
When improvements on the training set do not carry over to the test set, it’s a strong signal to stop fine-tuning the model.
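A sketch of that three-way split, with the caveat that the data and the 60/20/20 proportions are conventional assumptions rather than a rule:

    import numpy as np
    from sklearn.model_selection import train_test_split

    X = np.random.rand(100, 5)             # illustrative features
    y = np.random.randint(0, 3, size=100)  # illustrative 3-class labels

    # Carve off 40% of the data, then split it half-and-half into
    # validation and test sets (a 60/20/20 split overall).
    X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.4, random_state=0)
    X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)
    # Tune against X_val; once training gains stop carrying over,
    # stop fine-tuning and score X_test once.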
“In the end, we want the machine to help improve human behavior, and vice versa,” says Gautam Mitra, founder and CEO of OptiRisk Systems; Mitra spoke at a recent webinar on social and news data.
What is your opinion on the value of systematically adding news-related info to your workflow?
- It is extremely interesting, given the latest boom of big data and machine-learning technologies.
- It is worth exploring if I have the time and resources.
- I doubt the value, given that the news space contains so much noise.
- I am not interested at all.
Webinar:
"Tweets are moving markets: How to harness Twitter as a source of News." Access it here.
Long-term performance from sentiment
When the model and data are in good shape, they can combine to be a powerful tool for predicting price movement. Naturally, positive information about a company or industry might translate to greater buying activity, whereas negative press may precede a sell-off.
During a recent webinar, Verma demonstrated the benefit of trading sentiment with three different long-short strategies:
- Long the top 1/3 of stocks and short the bottom 1/3
- Long the top 5% and short the bottom 5%
- A proportional portfolio of long and short positions bounded by the mean
Each strategy ranks stocks by daily sentiment before the market opens, and closes all existing positions at the market close.
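A schematic of the first of these strategies in pandas; the tickers and scores are invented, and execution details such as sizing and costs are omitted:

    import pandas as pd

    # Hypothetical pre-open sentiment scores for six stocks.
    scores = pd.Series({"AAA": 0.8, "BBB": 0.4, "CCC": 0.1,
                        "DDD": -0.2, "EEE": -0.5, "FFF": -0.9})

    ranked = scores.sort_values(ascending=False)
    n = len(ranked) // 3
    longs, shorts = ranked.index[:n], ranked.index[-n:]
    # Enter at the open; all positions are closed at the market close.
    print("long:", list(longs), "short:", list(shorts))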
The results exhibit a strong synergy between news and Twitter data that outperforms any individual factor over a 1-year period.
Regulations are both an obstacle and
a boon to AI adoption.
When it comes to AI adoption in the financial services industry, increased
regulatory rigour both giveth and taketh away.
Artificial intelligence has been much hyped as a technology that will dramatically change workflows and processes across many traditionally human-centered corners of Wall Street, from sales & trading and investing to banking, FX and compliance.
Perhaps the pace of change has been a bit overpromised, but AI is nevertheless making an impact across many verticals, with an accelerating stream of new live applications and technologies. The increasing reach and complexity of regulations are making it more critical for companies to automate a significant share of the compliance process.
Since the global financial crisis in 2008-2009, regulators have ramped up reporting requirements to increase transparency and adherence to regulations.
The net result is that stringent reporting requirements have increased the cost and size of compliance teams and, in particular, have dramatically increased the amount of data being created and recorded.
This is not necessarily an impediment to doing business, but has required a significant learning process for banks, regulators and tech firms. Though regulation could be a speed bump towards AI adoption in some cases, it can also create opportunities in the short and long term.
The new explosion of data
Financial institutions today are reporting more information about the strength of their balance sheets, liability and off-balance sheet exposures, liquidity measures, collateral and capital levels, and reporting those details more frequently and retaining more granular history than ever before.
MiFID II, perhaps the most sweeping European regulatory requirement to affect the financial industry in decades, will drive the creation of a digital goldmine of trading data. This regulation has created the need to record large amounts of data that is well-defined and structured for regulatory review and sharing across counterparties and trading venues.
Under MiFID II, which went into effect in January 2018, firms are reporting various pre- and post-trade data, as well as venue of execution, venue of publication, transaction ID code and much more.
Much of the data is new and mining it can help firms create better analytics — determining which venue is better, slippage cost, aggregated market snapshots, liquidation cost and other advanced outputs.
Data scientists are grappling with the huge amount of data now available — 50-60 billion data points a day in trading. Those data points can depict the current state of the markets more accurately in real time and feed prediction models built with AI and other advanced statistical techniques.
An evolving symbiotic relationship between regulators and firms
In addition to spurring an explosion in data, new regulations are also creating opportunities for AI to prove its usefulness — by helping firms comply with regulations.
Financial firms are looking for ways for AI and machine learning to help streamline regulatory reporting and compliance. (Machine learning is a segment of AI based on the principle that given enough training data, machines can learn for themselves.)
It’s projected that the growing industry, known as “RegTech,” could reduce the cost of regulatory compliance.
The increase in the number of regulations and the complexity of multinational companies have made it difficult just to stay on top of new regulations. RegTech has primarily focused on making reporting procedures — such as know-your-customer, tax-reporting, or anti-money-laundering rules — easier for firms.
One RegTech startup in particular is developing software that lets banks process tax forms to meet compliance rules in real time rather than in batches.
RegTech is also helping firms make sense of regulations and see how they might apply to them. RegTech firms are experimenting with platforms that can streamline regulatory research, for example.
These solutions can integrate various regulations directly into compliance workflows — allowing companies to streamline organization structures, policies, and interpretations. RegTech solutions can alert management to gaps in control requiring attention — giving firms a chance to identify problems ahead of time rather than being reactive.
One law firm worked with a tech firm to develop an automated toolkit to help firms assess the impact of MiFID II regulation, allowing firms to filter through thousands of pages of regulation to find the relevant areas to a firm’s business type, clients and products. Firms are also developing chatbots to offer expert advice on a particular section of a regulation.
Firms aren’t the only ones eyeing AI and machine learning; regulators are looking as well. The Financial Conduct Authority, an independent U.K. financial regulatory body, has said it is looking into the possible use of AI and machine learning to enforce regulatory compliance. The FCA has also said it’s looking into making its handbook machine-readable — meaning machines would be able to interpret and implement its rules directly.
As AI evolves, regulations and regulators will play an important role, both hindering and facilitating its development by turns.
By pushing financial firms to become more transparent, regulations such as MiFID II are the catalysts for an enormous database of valuable information that can allow firms to automate efficiencies into various processes, refine automated trading for smaller types of trades or write better algorithms.
Firms that approach new regulations opportunistically and get ahead of them could give themselves a competitive leg up.
Which of the technologies below, currently in use or under evaluation, are increasing the efficiency of risk management within the finance function, or have the potential to do so?
- Predictive Analytics
- Artificial Intelligence
- Data engineering
- Robotic process automation
- All of the above
From risk analysis to returns forecasts,
machine learning is guiding investments.
While machine learning cannot do everything people can do, the technology is
finding traction in the field of finance.
Machine learning cannot do everything people can do but the technology is finding more widespread use in the field of finance.
Barry Porter talks with Gary Kazantsev, Bloomberg’s Head of Machine Learning, Gideon Mann, Bloomberg’s Head of Data Science, and Bruno Dupire, Bloomberg’s Head of Quantitative Research, about the possibilities.
Q: What is the biggest misconception about machine learning in finance?
Gary Kazantsev: That it is some sort of a magic wand that will solve hard problems in contravention of truths known from basic statistics.
No amount of machine learning will help if the problem you are trying to solve is ill-posed, or you don’t have a sufficient amount of data, or if you aren’t careful about issues like non-stationarity and bias.
Gideon Mann: One major misconception is that machine learning can do things that people cannot do — that it can magically accomplish things that tax human ability. Typically, the biggest impacts of machine learning come from automating simple and straightforward human decisions, but doing so at a cost that makes processing at scale economical. This, in turn, leads to the appearance of magic.
Q: What excites you most?
GK: The range of available problems that are now possible to tackle using machine-learning methods.
Bruno Dupire: The challenges that AI throws at us, and how it forces us to question what constitutes our essence as human beings. Domains of competence formerly thought to be our unassailable kingdom are surrendering one by one, redefining ontological issues.
Two big questions are:
Can machines perform all our cognitive tasks?
And, if they can, are they going to perform them much better than we do?
Q: How advanced is machine learning in finance today?
GK: It depends. The range of problems being attacked and methods used is now vast and rapidly expanding. We are familiar with organizations that treat end-to-end strategy development (from portfolio selection to execution) as an ensemble machine-learning problem.
There are also plenty of firms who are now only starting to investigate this field.
The level of acceptance of new technology in financial institutions varies depending on their acceptable risk profile, specific requirements for interpretability and transparency of models, and even geographical region. This applies to machine learning even more so than many other technologies.
BD:
"Quantitative finance is a natural field for ML as learning to establish links between input data and returns is very valuable."
Bruno Dupire, Head of Quantitative Research, Bloomberg
It is still in its early stage, but catching up very quickly, avidly. Data, both structured (security price time series, fundamentals) and unstructured (text from news, tweets and call transcripts; internet searches; satellite images), are systematically exploited, and the array of methods is ceaselessly expanding.
Random forests, support vector machines, knowledge graphs, recurrent nets, LSTMs (long short-term memory networks), convolutional nets, GANs (generative adversarial networks): the field has changed a lot since I first used neural nets to forecast financial time series in 1987.
Q: How are sophisticated clients using machine learning in their workflow, and how is it impacting investment strategies?
GK: We have seen everything from counterparty risk analysis to optimal execution, and from predicting bankruptcy risk to forecasting returns, earnings or unemployment statistics.
It’s also being used in portfolio construction, sentiment analysis of financial news and so on. Machine learning is becoming an integral part of the toolbox used in creation of systematic strategies.
How are you using machine learning in your workflow?
- Using it only for ad-hoc research purposes
- Generating signals/factors which then get vetted/adapted by experts
- Using it systematically in production
- Not incorporated at all
Q: What is driving investment and attention in machine learning in the financial industry?
GM: Machine learning has had an enormous effect on other industries and has driven significant growth. Think Google, Amazon, Facebook. There are also an increasing number of financial firms that have been able to harness machine learning to drive value.
Finally, the pressure to trim costs has focused firms inward to see if they can do more with less, and enhancing employee productivity through augmentative technology has become more appealing.
Q: What new Bloomberg machine-learning application or tool are you most proud of and why?
BD: We are building a machine-learning prototyping suite that enables the user to access scikit-learn, TensorFlow and our own functions, in a very user-friendly interactive environment.
It offers multiple ways to visualize the data, the progress of the learning and how the algorithm operates.
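Not the Bloomberg suite itself, but a sketch of the kind of workflow described: fit a scikit-learn model and visualize learning progress as training data accumulates. The dataset here is synthetic.

    import matplotlib.pyplot as plt
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import learning_curve

    X, y = make_classification(n_samples=500, random_state=0)
    sizes, train_scores, val_scores = learning_curve(
        LogisticRegression(max_iter=1000), X, y, cv=5)

    # Plot mean cross-validated accuracy as the training set grows.
    plt.plot(sizes, train_scores.mean(axis=1), label="training accuracy")
    plt.plot(sizes, val_scores.mean(axis=1), label="validation accuracy")
    plt.xlabel("training examples")
    plt.legend()
    plt.show()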
GM: We have made significant investments in our neural network infrastructure, and because of our efforts have seen numerous examples of deployed neural network models.
From these, the effort in understanding tables has particularly made me proud as it demonstrates the power of these new technologies on a thorny old issue.
GK: I am particularly proud of the work we have done on question answering. We have been able to make an impact on the way clients use the Terminal despite this being a very challenging open problem.
Q: Can you give one prediction for the future?
GM: I think the future is likely to be increasingly characterized by fairly stable periods interrupted by very rapid changes as the speed at which information and technology gets disseminated increases.
GK: Sea levels will rise, markets will fluctuate, and deep learning will not give us true human-level artificial intelligence.
BD: For advanced tasks, it is not enough to let data drive the learning process, one also needs to inject expert knowledge, leading to hybrid systems.
"I think the community will soon realize that neural nets, however deep they are, cannot solve every problem. We are likely to observe a merging of neural nets and logic-based systems, especially when data is scarce."
Bruno Dupire, Head of Quantitative Research, Bloomberg
Trading desks:
Toward an augmented future.
New duties as a result of regulation are limiting time sales traders can spend
with clients — here’s how technology is changing that.
In an era of increasingly stringent regulation and cost constraints, compliance duties often eat into the time available for sell-side fixed income traders to do the essential work of interacting with clients — a state of affairs which threatens to reduce the overall profitability of desks.
There is hope for a solution on the horizon, however, as trader augmentation tools help traders and trading desks automate key activities and, consequently, shift attention back to the client, improving client service.
The changing role of the sales trader
Greenwich Associates reports that fixed-income broker-dealers have found their “new normal in doing more with less”. The average sell-side fixed-income trader manages relationships with 20 or more clients in an environment of shrinking profit margins, declining balance sheets and ballooning regulatory costs.
The regulatory and compliance workload of head traders has increased exponentially — and the constant regulatory and cost pressures don’t seem to be going away.
All of the respondents in the Greenwich study rely to some extent on technology to ensure they remain in compliance with regulations. Regulatory technology has become big business — the industry spends $15-20 billion annually on it — yet only 11% of respondents said they relied completely on technology to ensure compliance.
Compliance automation will be a must for any trading desk hoping to remain relevant in the coming years; the increasing time desks spend on compliance and away from clients drives that point home.
Regulations require firms to capture data that was not previously collated, and this data is becoming the fuel for new trends such as trader augmentation.
Experienced traders often manage the largest and most complicated accounts for the bank, which represent the most value to the firm. The use of augmentation technology here would seem a foregone conclusion.
At the other end of the scale, there has been a notable "juniorization" of sell-side trading desks, with less experienced traders employed. Encouraging these traders to utilize sales trading tools now will likely benefit both the desk’s P&L and those selling these products.
Head traders should be considering how compliance processes can be automated and asking themselves and their colleagues the following:
- Do we have the right tools in place to automate compliance checks?
- Will automation lessen the need to interact with compliance teams?
- Are there better ways of providing our compliance teams and desks with the tools they need?
How critical is trading automation in your overall workflow?
- Critical
- Critical but not sure we feel comfortable with complete automation of trading
- Not critical
Deciphering the value in sales technology
Greenwich’s research also found that the clear majority of sell-side traders do see value in the sales technology on their desk. Why? Simply put, these tools can allow them to reach a broader audience even as they maintain or improve service levels.
Trader augmentation tools are not about "automating away" salespeople — they are about providing new datasets that enable salespeople to provide clients with a unique idea.
For example, trades that were previously difficult to execute and may have remained on the blotter can be completed if the order management system improves a sales trader’s ability to find the other side of that trade.
The age of big data and technologies such as machine learning can help traders to get much more value out of information, including data related to clients, holdings, trades, and even events that were not traded.
All of the queries a client contacts a trader about can be incorporated into the augmentation tool. And the next time an investor expresses interest in something, the trader has on hand all of the information about what that investor did previously. All of this and more can be fed into trader augmentation tools from an OMS and from other systems such as instant messaging.
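A toy sketch of the inquiry log such a tool might maintain; the channels and record shape are assumptions, and a real system would sit on an OMS rather than an in-memory dict:

    from collections import defaultdict
    from datetime import datetime

    inquiries = defaultdict(list)

    def record_inquiry(client: str, instrument: str, channel: str) -> None:
        # Log any client inquiry, whether it arrives via the OMS, chat or voice.
        inquiries[client].append((datetime.now(), instrument, channel))

    record_inquiry("Fund A", "XYZ 5Y bond", "chat")
    # Next time Fund A calls, the trader can recall its prior interest instantly.
    print(inquiries["Fund A"])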
Though regulations are often viewed as a burden, the data that now must be captured can provide traders with useful intelligence. So, there’s an upside here — why not take advantage of all of this data you are capturing?
Augmentation is not about replacing traders; human relationships will always drive the capital markets.
By automating compliance processes and leveraging the data regulators require firms to capture, head traders at fixed income desks can reclaim some of their time and shift their focus back to clients, providing improved information and better execution.
Tech update from BloombergNEF.
The energy industry is being transformed by new technology — and these are the
lessons and trends other industries should look out for.
Trends in technology have the potential to disrupt and radically transform industries. These developments fuel a push toward innovation and force companies to move quickly to avoid complacency. The energy industry provides an excellent example, with established utilities and new energy start-ups alike looking for ways to leverage new forms of technology.
From investment in cloud computing to the advancement of machine learning, there are opportunities and lessons to be learned and applied across industries, using energy as a blueprint.
The IoT: Data Collection & Organization
Why it’s trending: Data collection by sensors in connected devices, working in tandem with new machine-to-machine communications networks, has the potential to radically streamline existing workflow structures. Improved acquisition, new methods of data integration, and advanced analytics offer numerous possibilities to rethink, streamline and improve.
In the energy space: Data is a commodity, particularly in energy. Companies across the industry are using thermal imaging and sensors to collect data, organize it and place it on IoT platforms. Once in place, the data is valuable in supporting analytics — ultimately cutting costs and improving efficiency.
Further implications: Seamless data connection and analysis allow companies to use machine learning and artificial intelligence. This means asset operators can take predictive measures, preemptively solving problems before they arise. For example, being able to anticipate and schedule maintenance and repairs leads to fewer failures and less downtime.
Cloud Computing
Why it’s trending: When information is stored in the cloud, companies can cut down on infrastructure and centralize important data. By using less hardware, cost of ownership declines and businesses benefit from no longer needing to manage their own data centers.
In the energy space: When utilities can limit their need for on-premise hardware, they’re able to harness greater processing power and more advanced analytics, ultimately evolving their business and becoming more cost-effective.
However, industrials need to trust new cybersecurity methods for protecting machine data before they move wholesale to the cloud.
Further implications: Cloud computing brings connectivity, scalability, and mobility to a variety of businesses. Investing in the cloud allows a move to a more dynamic business model, and most digital IoT platforms rely on the cloud. However, nascent edge-computing techniques may be set to disrupt cloud computing’s dominance.
Robotics & Drones
Why it’s trending: Automation, particularly the kind that robotics and drones can provide, is already widely adopted across industries to improve a variety of processes. Now, when used in conjunction with newer technologies, robotics and drones can work smarter and add more value than ever before.
In the energy space: Automated mapping software is being used in environments like mines and oil rigs, contributing to machine learning and forecasting how systems will behave when exposed to different stresses. Drones are also being developed for tasks deemed too dangerous for human workers, like cleaning boilers.
Further implications: As automated technologies become more advanced, and machine vision commercializes, businesses will be able to deploy intelligent robotics and drones to greater effect, performing tasks that don’t require human engagement and eliminating unnecessary steps altogether. Optimizing workflows and prioritizing more complex tasks will add enormous value.
Blockchain
Why it’s trending: Aside from Bitcoin and the rise of cryptocurrencies, blockchain has other valuable applications. The use of digital ledgers allows for secure processing of real-time transactions and automated trading.
In the energy space: The digital ledgers that make up blockchains will make trading in energy more liquid, as well as enabling developers to raise capital
for projects. Blockchain could be instrumental in developing peer-to-peer electricity sales.
Further implications: Customers are more concerned about data security than ever, and, given the decentralized nature of blockchain, it could be how corporates and consumers get comfortable with sharing their private data for use in large AI algorithm development.
Virtual Reality & Augmented Reality
Why it’s trending: Although VR and AR may be most famous for their use in gaming, adoption has also been quick in the energy and industrials sector — particularly for employee training and other HR applications.
In the energy space: Many utilities are using AR to equip field workers with more information and context. VR’s use in training employees for difficult or dangerous tasks is now widespread across many leading utilities.
Further implications: Both virtual reality and augmented reality can be leveraged for a variety of uses, from mock interviews and conflict resolution to worker augmentation for improved productivity and safety.
How to grow the space for
women in FinTech.
Women continue to be underrepresented in the technology space – but perspectives are changing.
Advice from the experts
Despite efforts to diversify, women remain underrepresented in the technology space — particularly in coding and upper management. Prioritizing diversification has been an uphill battle with so few women in the field to advocate for representation.
This key challenge, and how to tackle it, was among the topics discussed at Bloomberg’s Women in FinTech event in New York last year. Panelists, all high-ranking women in financial technology, offered some thoughts and advice on what needs to be done, as well as how they got to their roles today.
The good news is that many of the hurdles faced by women in financial technology stem from cultural expectations, and those perspectives are changing, according to the panelists.
Helen Altshuler, Senior Engineering Leader at Google, who grew up in Ukraine, said she did not encounter the concept of male- versus female-dominated fields until arriving in the U.S.
“Women were construction workers, factory directors, astronauts and scientists and anything else you can imagine [in Ukraine]. There are also a lot of women leaders around the world,” she said. As women take on a wider variety of roles in different arenas and with the rise of globalization, the concept of women and their capabilities may change.
Prepare future leaders by building a community today
On practical ways to improve the male-female ratio in the technology field, the panelists went on to point out the importance of starting early and encouraging girls to pursue their interest in science and technology with their friends and peers.
Altshuler gave a prime example. As a mother to both a son and daughter, she has made efforts to expose both her children to programming and technology. But while her son is thriving and says he wants to be an engineer, her daughter says she would rather be a dancer.
This is a problem that continues to plague the current-day workplace. Because technology, and FinTech in particular, doesn’t have a strong female community, it can be difficult to attract more women to the field.
Programs that encourage women to get involved in technology are key to creating that community and bringing more girls and women into the fold, panelists said.
An active mentor to other women and girls now, Altshuler encourages women to focus on the work. “I didn’t realize I was the only woman in the room for years. It didn’t matter because we’re all people. On a day-to-day basis, just push forward because you know what you bring to the table,” she said.
Rely on your own capabilities to take on new opportunities
The panelists also encouraged event attendees to foster their individual capabilities and seize opportunities.
Sandra Behar, Director of Anti-financial Crime Risk and Controls, Regional Head of Deutsche Bank, advised participants to check in with themselves and make sure they love what they’re doing. “We spend so many hours at the office — more than what we spend at home or with our families – so we should be doing something that we like,” she said. “Unfortunately, sometimes as women, we have to prove ourselves more than men,” she added.
"There are so many positions in my world where the global heads are women because they are detail-oriented and can multi-task, and these are roles where that’s what is needed. Seize the opportunity and don’t be afraid to ask questions.”
Sandra Behar, Director of Anti-financial Crime Risk and Controls, Regional Head of Deutsche Bank.
Another panelist, Colette Garcia, Head of Service Delivery at Bloomberg, emphasized not shying away from new challenges.
She related her own experience moving to Hong Kong for a job, even though she had never visited Asia.
“To say I was terrified was an understatement,” she said. “There are 40-50 examples of moments that were similar to that, where I really wanted to say no but then there’s a part of me that says ‘sure, try it.’ It might not all work out, but you learn so much and just embracing and having had those experiences is definitely a key factor,” she said.
Recognize the value of perspective
While new opportunities can help women build experience, women can add value to less diverse spaces by bringing a new perspective to the team and the work. New perspectives help bring out better results for everyone involved.
“Lack of diversity and ideas constrain people,” said Gerard Francis, Head of Enterprise Solutions at Bloomberg, comparing his experiences working in a very diverse office in Singapore to a homogenous office in Tokyo. “Every time we had a diverse team, that team dramatically outperformed those that were homogeneous,” he said.