Since 1987 - Covering the Fastest Computers in the World and the People Who Run Them
Fraud costs and attacks are on the rise in the financial services industry. Traditional methods to combat fraud either detect fraud after it happens or cannot keep pace with the volume and sophistication of the attacks. A better approach is to employ real-time fraud detection and prevention. The computational requirements of such an approach can be met with cloud-based, GPU-accelerated artificial intelligence.
Real-time action to prevent fraud is becoming increasingly necessary due to fraud’s growing impact on financial services institutions. The LexisNexis True Cost of Fraud Study: Financial Services & Lending, released earlier this year, found that the cost of fraud for U.S. financial services and lending firms has increased by between 6.7% and 9.9% compared with before the pandemic. Every $1 of fraud loss now costs U.S. financial services firms $4.00, compared to $3.25 in 2019 and $3.64 in 2020.
One factor driving this growth is that more transactions are done via mobile apps. As the pandemic started, the FBI warned that it expected cyber actors to attempt to exploit mobile banking customers using various techniques, including app-based banking trojans and fake banking apps. The problem has worsened as mobile banking use rose sharply during the pandemic, and many people continue to use it now for its convenience.
Fraudsters also exploit weaknesses throughout the customer journey. Everything from how users authenticate themselves to any of the numerous touchpoints a customer has with an institution can introduce exploitable vulnerabilities.
Financial services institutions are using artificial intelligence (AI) and machine learning (ML) to spot fraud in the making in real time and prevent it from happening.
For example, most traditional fraud detection approaches use rules that flag suspicious transactions. The approach might look for online purchases from a suspicious location or a customer uncharacteristically spending above a certain level. Such rule-based platforms are inefficient because they rely on expected customer behavior and generate a large percentage of false positives. An AI/ML approach can instead be trained on real customer behavior rather than relying on a set of rules.
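To make the contrast concrete, here is a minimal, hypothetical sketch (not drawn from any production system) of a fixed rule versus a classifier trained on labeled transaction history; the feature names, thresholds, and simulated data are purely illustrative.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Rule-based check: flags any transaction above a fixed amount or far from
# the customer's home -- simple, but blind to context and individual habits.
def rule_based_flag(amount, distance_from_home_km):
    return amount > 1000 or distance_from_home_km > 500

# ML-based check: a classifier trained on historical transactions labeled
# fraudulent or legitimate learns behavior instead of fixed thresholds.
# X holds hypothetical per-transaction features (amount, hour of day,
# distance from home, merchant category); y holds fraud labels.
rng = np.random.default_rng(0)
X = rng.random((10_000, 4))
y = (rng.random(10_000) < 0.02).astype(int)  # ~2% simulated fraud rate

model = GradientBoostingClassifier().fit(X, y)
fraud_probability = model.predict_proba(X[:5])[:, 1]  # score new transactions
```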
Such approaches are often complemented by focusing on the individual customer rather than the collective expected behavior of similar customers. The model learns every time a customer makes a purchase, searching for activities and patterns to understand what that customer’s typical purchase behavior looks like and to spot deviations from it. The same techniques are also being applied to other areas of financial crime, such as the fight against money laundering. For example, AI-based Know Your Customer (KYC) processes frequently provide additional insights that improve visibility into potential risks associated with financial crimes.
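A minimal sketch of this per-customer profiling idea follows (it is illustrative only, not any vendor’s implementation); the data, column names, and threshold are hypothetical.

```python
import pandas as pd

# Hypothetical purchase history: one row per past transaction.
history = pd.DataFrame({
    "customer_id": [1, 1, 1, 1, 2, 2, 2, 2],
    "amount":      [25, 30, 28, 32, 900, 850, 880, 870],
})

# Per-customer baseline: "typical" behavior is learned from each customer's
# own purchases, not from a population-wide rule.
baseline = history.groupby("customer_id")["amount"].agg(["mean", "std"])

# Score incoming transactions by how far they deviate from that baseline.
incoming = pd.DataFrame({"customer_id": [1, 2], "amount": [400, 860]})
scored = incoming.join(baseline, on="customer_id")
scored["z_score"] = (scored["amount"] - scored["mean"]) / scored["std"]

# A purchase of several hundred dollars is routine for customer 2 but highly
# unusual for customer 1, so only customer 1's transaction is flagged.
print(scored[scored["z_score"].abs() > 3])
```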
Training ML models and running AI for fraud detection and prevention requires huge computational resources. Workloads can greatly benefit from elastic and scalable cloud-based, GPU-accelerated resources running optimized AI/ML algorithms, routines, and libraries. Marrying the right cloud and GPU technologies can provide the requisite scalability, faster and more efficient detection, and increased accuracy.
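As one illustration of such GPU-accelerated libraries, the sketch below uses NVIDIA’s open-source RAPIDS suite (cuDF and cuML, discussed further below) to train a fraud classifier on the GPU. It assumes a machine with an NVIDIA GPU and RAPIDS installed; the file name and feature columns are hypothetical.

```python
import cudf
from cuml.ensemble import RandomForestClassifier

# Load transaction data directly into GPU memory.
transactions = cudf.read_csv("transactions.csv")
features = transactions[["amount", "hour", "merchant_risk"]].astype("float32")
labels = transactions["is_fraud"].astype("int32")

# Train a GPU-accelerated random forest; the API mirrors scikit-learn,
# so an existing CPU pipeline can be ported with few code changes.
model = RandomForestClassifier(n_estimators=100, max_depth=12)
model.fit(features, labels)
predictions = model.predict(features)
```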
Assembling the various compute and software elements needed to do AI-based fraud detection at the transaction volumes major financial institutions handle every day is a complex task. Many organizations do not have the time, money, or skills to undertake a modern fraud detection effort on their own. They therefore need partners with the right technology and deep industry-specific AI expertise.
Microsoft and NVIDIA have been working together in the AI/ML arena for many years, with the aim of bringing NVIDIA GPU technology to Azure to speed up entire AI/ML pipelines. Much of the technology, best practices, and methodologies they have jointly developed can be applied to fighting fraud.
This partnership has brought many innovations to market that make GPU acceleration available to more developers and businesses interested in using AI/ML. Azure Machine Learning was the first major cloud ML service to integrate RAPIDS, an open-source software library suite from NVIDIA, allowing traditional machine learning users to easily accelerate their pipelines with NVIDIA GPUs. The two companies also integrated the NVIDIA TensorRT acceleration library into ONNX Runtime, enabling deep learning users to speed up inferencing.
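For inference, selecting the TensorRT execution provider in ONNX Runtime is typically a one-line change. The sketch below assumes a fraud-scoring model has already been exported to ONNX; the file name and input name are hypothetical, and ONNX Runtime falls back to the next provider in the list if TensorRT is unavailable.

```python
import numpy as np
import onnxruntime as ort

# Prefer TensorRT, then CUDA, then CPU for scoring transactions.
session = ort.InferenceSession(
    "fraud_model.onnx",
    providers=["TensorrtExecutionProvider",
               "CUDAExecutionProvider",
               "CPUExecutionProvider"],
)

# Score a batch of incoming transactions (four hypothetical features each).
batch = np.random.rand(256, 4).astype(np.float32)
scores = session.run(None, {"features": batch})[0]
```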
Work in these areas has continued. Last year, Azure announced support for NVIDIA’s T4 Tensor Core Graphics Processing Units (GPUs), which are optimized for the cost-effective deployment of machine learning inferencing or analytical workloads.
The true strength of the partnership is that the technologies are tightly integrated and optimized. For example, using GPUs efficiently depends on specialized libraries, and installing and configuring them takes time and effort. Azure pre-installs these libraries and sets up the complex networking between compute nodes through its integration with GPU pools. In addition, NVIDIA and Azure have jointly developed optimal configurations for GPU-accelerated AI workloads, saving companies time and operational costs.
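As a hedged example of what standing up such a GPU pool can look like, the sketch below uses the Azure Machine Learning Python SDK (v1). It assumes an existing workspace configuration; the cluster name and VM size are illustrative, not recommendations.

```python
from azureml.core import Workspace
from azureml.core.compute import AmlCompute, ComputeTarget

ws = Workspace.from_config()  # reads config.json for the target workspace

# Autoscaling pool of NVIDIA GPU-backed VMs for training bursts.
gpu_config = AmlCompute.provisioning_configuration(
    vm_size="Standard_NC6s_v3",  # GPU VM family (illustrative)
    min_nodes=0,                 # scale to zero when idle
    max_nodes=4,
)
gpu_cluster = ComputeTarget.create(ws, "fraud-gpu-cluster", gpu_config)
gpu_cluster.wait_for_completion(show_output=True)
```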
Tying this back to fraud detection, the cloud-based, GPU-accelerated artificial intelligence offered by Microsoft Azure and NVIDIA makes the needed computational resources available to institutions that want to adopt modern, real-time fraud detection and prevention. They also get all the benefits of running AI in the cloud, including the ability to scale up and out, security, fast interconnects, and more. In this way, such organizations can use the most sophisticated AI/ML-based approaches to better protect their own and their customers’ assets.