Using Cloud-Based, GPU-Accelerated AI for Algorithmic Trading - HPCwire

2022-09-24, by Susan Wei

Since 1987 - Covering the Fastest Computers in the World and the People Who Run Them

Financial institutions such as banks, hedge funds, and mutual funds use quantitative analysis to make stock trades. An Investopedia article indicates, “Quantitative trading consists of trading strategies based on quantitative analysis, which rely on mathematical computations and number crunching to identify trading opportunities. Price and volume are two of the more common data inputs used in quantitative analysis as the main inputs to mathematical models.”

It is critical for financial services organizations to stay ahead of the competition and maximize profitability in stock trading. To meet this goal, financial firms develop their own algorithmic trading models, which are treated as protected intellectual property and are not shared. These models use computers to analyze a mix of proprietary data, statistical and risk analysis, and external data.

Trading strategies were traditionally developed by financial quantitative analysts (quants) using "what if" rules to identify the best and most profitable trading opportunities. Once the trading strategies were refined, the trading criteria were hard-coded into computer programs used to make real-time stock market trades. Trading programs were often run on financial services data center computers using central processing units for the computation. The massive amounts of data to be processed placed a strain on data center infrastructure. In addition, quantitative analysts could not keep up with the analysis required to update their trading models to reflect constantly changing market and economic conditions. Algorithmic trading was created to help financial services organizations meet today's fast-paced stock trading needs.

Algorithmic trading is a method of executing orders using automated pre-programmed trading instructions accounting for variables such as time, price, and volume. This type of trading attempts to leverage the speed and computational resources of computers relative to human traders.

Financial services firms are increasingly building highly automated algorithmic trading systems that use artificial intelligence (AI) for quantitative trading analysis. According to SG Analytics, algorithmic trading accounts for roughly 60–73% of all US equity trading.

Algorithmic trading involves building unique computer models that find patterns or trends not typically perceived by humans scanning charts or ticker (price) movements. The algorithms use quantitative analysis to execute trades when conditions are met. A simple example: if the price of oil hits $130 and the US dollar declines 5% over the previous two weeks, then sell oil and buy gold in a 20:1 ratio. Mathematical statistics such as standard deviation and correlation would be added to the model to determine when to execute a trade.
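A rule of that kind is straightforward to express in code. The sketch below is purely illustrative: the function name, the $130/5% thresholds, the 20:1 ratio, and the volatility filter are the hypothetical values from the example above, not a real strategy.

```python
# Illustrative only: a toy rule-based trading signal of the kind described
# above. Thresholds, tickers, and the 20:1 ratio are hypothetical.
from statistics import stdev

def rebalance_signal(oil_price, usd_index, vol_cap=1.5):
    """Fire the example rule: oil >= $130 and USD down >= 5% over the window.

    usd_index: daily USD index values over the past two weeks, oldest first.
    vol_cap: skip trading when recent volatility (stdev of daily % returns)
    exceeds this threshold -- a stand-in for the statistical filters the
    text mentions (standard deviation, correlation, and so on).
    """
    usd_change = (usd_index[-1] - usd_index[0]) / usd_index[0]
    daily_returns = [100 * (b - a) / a for a, b in zip(usd_index, usd_index[1:])]
    if oil_price >= 130 and usd_change <= -0.05 and stdev(daily_returns) <= vol_cap:
        return ("SELL OIL", "BUY GOLD", 20)  # 20:1 ratio from the example
    return None
```

A production system would of course source live market data and size positions far more carefully; the point here is only that the "what if" rule becomes a deterministic, machine-executable function.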

Machine learning (ML) is especially valuable in algorithmic trading because ML models can identify patterns in data and automatically update training algorithms based on changes in data patterns without human intervention or relying on hard-coded rules. According to a Finextra article, “With the hiring of data scientists, advances in cloud computing, and access to open source frameworks for training machine learning models, AI is transforming the trading desk. Already the largest banks have rolled out self-learning algorithms for equities trading.”
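What "self-learning" means in practice can be sketched with a minimal online model: one whose parameters adjust with every new observation, with no hard-coded rules and no human retraining step. Real trading desks use far richer models; the class below (a hypothetical name) just learns a one-step-ahead price forecast by stochastic gradient descent.

```python
# Minimal sketch of a self-updating model: parameters are revised on every
# new tick, so the model tracks drifting data without human intervention.
class OnlinePricePredictor:
    def __init__(self, lr=1e-5):
        self.w = 1.0   # weight on the last observed price
        self.b = 0.0   # bias term
        self.lr = lr   # learning rate for the gradient step

    def predict(self, last_price):
        return self.w * last_price + self.b

    def update(self, last_price, actual_price):
        # One gradient step on squared error: the model "retrains itself"
        # every tick as the data distribution changes.
        err = self.predict(last_price) - actual_price
        self.w -= self.lr * err * last_price
        self.b -= self.lr * err
```

Feeding the model a stream of (previous price, realized price) pairs makes its forecast error shrink over time, which is the core property the self-learning equity algorithms mentioned above exploit at much larger scale.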

The complexity and infrastructure requirements of algorithmic trading make it important for financial organizations to have partnerships with technology providers. Many of today’s algorithmic trading systems are powered by advances in GPUs and cloud computing.

Microsoft and NVIDIA have a long history of working together to support financial institutions by providing cloud, hardware, platforms, and software to support algorithmic trading. Microsoft Azure cloud, NVIDIA GPUs and NVIDIA AI provide scalable, accelerated resources as well as routines, and libraries for automating quantitative analysis and stock trading.

The partnership between Microsoft and NVIDIA makes NVIDIA's powerful GPU acceleration available to financial institutions. Azure supports NVIDIA's T4 Tensor Core GPUs, which are optimized for the cost-effective deployment of machine learning inference and quantitative analytics workloads. The Azure Machine Learning service integrates NVIDIA's open-source RAPIDS software libraries, which allow machine learning users to accelerate their pipelines with NVIDIA GPUs.

In addition to Microsoft Azure Cloud solutions, Microsoft also provides tools that help developers and quantitative analysts develop and modify trading algorithms.

Microsoft Research developed Microsoft Qlib, an AI-oriented quantitative investment platform that contains the full ML pipeline of data processing, model training, and back-testing, covering the entire automated workflow of quantitative investment. Other features include risk modeling, portfolio optimization, alpha seeking, and order execution.
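The back-testing stage such a platform automates can be illustrated with a generic walk-forward loop. To be clear, this is not Qlib's API; it is a plain-Python sketch of the idea, with a hypothetical momentum signal used as the strategy under test.

```python
# Generic walk-forward back-test: replay history, let the strategy decide
# using only data available at each step, and mark the result to market.
def backtest(prices, signal_fn, capital=10_000.0):
    """prices: historical price series. signal_fn: maps the history seen
    so far to a target position of +1 (long), -1 (short), or 0 (flat).
    Returns the final portfolio value."""
    position = 0          # units of the asset held (may be negative)
    cash = capital
    for t in range(1, len(prices)):
        target = signal_fn(prices[:t])   # decide using past data only
        delta = target - position        # trade the difference
        cash -= delta * prices[t]
        position = target
    return cash + position * prices[-1]  # mark final position to market

# Hypothetical example signal: go long after two consecutive up days.
def momentum(history):
    if len(history) >= 3 and history[-1] > history[-2] > history[-3]:
        return 1
    return 0
```

Evaluating a strategy only on information available at each point in time (no look-ahead) is the discipline that back-testing frameworks enforce, alongside realistic order execution and transaction costs that this sketch omits.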

Microsoft Azure Stream Analytics is a fully managed (PaaS), real-time analytics service designed to analyze and process high volumes of fast-streaming data from multiple sources simultaneously, providing large-scale analytics in the cloud.

Patterns and relationships can be identified in information extracted from various input sources and applications. Financial institutions can create, customize, or train algorithmic ML trading models using the combination of SQL language and JavaScript user-defined functions (UDFs) and user-defined aggregates (UDAs) in the Azure Stream Analytics tool.
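Stream Analytics expresses these aggregations in its SQL dialect (for example, grouping by a tumbling window). The Python sketch below shows the same idea, a tumbling-window average over a price stream, purely to illustrate the concept; it is not how the Azure service is programmed.

```python
# Tumbling-window aggregation sketch: each event falls into exactly one
# fixed-width, non-overlapping time window, and we average prices per window.
from collections import defaultdict

def tumbling_avg(events, window_seconds):
    """events: iterable of (timestamp_seconds, price) pairs. Returns a dict
    mapping each window start time to the average price in that window."""
    sums = defaultdict(lambda: [0.0, 0])      # window_start -> [sum, count]
    for ts, price in events:
        window_start = (ts // window_seconds) * window_seconds
        acc = sums[window_start]
        acc[0] += price
        acc[1] += 1
    return {w: s / n for w, (s, n) in sorted(sums.items())}
```

In the managed service, the same windowed logic runs continuously over live input streams, and user-defined functions extend it where plain SQL is not expressive enough.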

Financial institutions using legacy data centers can no longer keep up with the massive amounts of data and analysis required for today's fast-paced stock trading. Algorithmic trading using AI and ML, which does not require human analysis, is becoming the norm for stock trading. Microsoft and NVIDIA provide advanced hardware, cloud, AI, and software solutions for algorithmic trading to meet the needs of the digital age.


© 2022 HPCwire. All Rights Reserved. A Tabor Communications Publication