
Securing AI-Powered Banking: A Guide to Ensuring Privacy and PII Security Published by Info-Tech Research Group


In the firm’s latest research, Info-Tech highlights essential guidance for banks grappling with the complexities of integrating AI while safeguarding personally identifiable information (PII). With a focus on actionable strategies tailored to the banking sector, the firm’s insights will empower institutions to effectively navigate the evolving landscape of AI risks and compliance challenges.

TORONTO, March 20, 2024 /PRNewswire/ – The integration of artificial intelligence (AI) into banking operations has brought unprecedented advancements to the industry, enhancing customer experience, risk management, and operational efficiency. However, these innovations also bring their own set of challenges, particularly in protecting personally identifiable information (PII). With AI algorithms processing vast amounts of sensitive customer data, the banking sector faces a heightened risk of breaches and regulatory non-compliance. Info-Tech Research Group’s latest research blueprint, Protecting PII When Using AI in Banks, addresses this critical issue, offering a strategic roadmap to secure PII within AI-enhanced banking environments.

The firm’s comprehensive research offers IT leaders actionable insights and best practices, emphasizing the responsible and secure integration of AI technologies in the financial sector. By applying the methodologies outlined in the blueprint, banking institutions can navigate the complexities of data privacy and protection while leveraging AI’s transformative potential.

“Unlike other applications, the PII threats that arise from AI and ML come primarily from within the banks. This contrasts with most of the other data-related threats that the banks are used to encountering, which are primarily external,” says David Tomljenovic, principal research director at Info-Tech Research Group. “The internal nature of the risks associated with AI and ML has meant that many banks are not well prepared. Preparing for internal risks is quite different from preparing for external threats.”

The new blueprint highlights a significant gap in bank employees’ understanding of how AI processes data, presenting new challenges in data security. The absence of formal acceptable use policies specific to AI technologies can leave banks vulnerable, as existing data security protocols are unprepared to manage AI’s emerging complexities. These challenges underscore an urgent need for the banking sector to develop new processes and applications to safeguard PII and ensure ongoing compliance.

“The greatest challenge with using AI in banking is that every employee in the bank has the potential to expose PII while experimenting with or using AI or ML tools,” says Tomljenovic. “Unless there is a well-defined threat, the banks must assume that everyone is a potential risk, regardless of whether the threat is intentional or accidental. The outcome in both cases is the same.”

Info-Tech emphasizes the need for banks to implement several measures to enhance PII security and deepen employee AI knowledge. These essential measures include training on AI and its use to increase awareness of its potential risks and to foster a culture of data security. Additionally, banks are advised to scan their existing systems and data to locate PII, followed by encrypting or tokenizing this sensitive information to enhance its protection. This approach not only helps to secure data but also cultivates a more informed and vigilant workforce.
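The blueprint itself does not include code, but the scan-then-tokenize workflow described above can be illustrated with a minimal sketch. The regex patterns, the tokenize_pii helper, and the in-memory token vault below are hypothetical assumptions for illustration only; a production deployment would rely on a dedicated data-discovery and tokenization platform rather than simple pattern matching.

```python
import re
import secrets

# Hypothetical PII patterns for a simple regex-based scan; a real discovery
# tool would use far more robust detection (checksums, context, ML models).
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

# In-memory token vault for illustration only; production systems keep the
# token-to-value mapping in a hardened, access-controlled vault service.
TOKEN_VAULT: dict[str, str] = {}


def tokenize_pii(text: str) -> str:
    """Replace detected PII with opaque tokens before the text leaves
    a controlled environment (e.g., before it reaches an AI/ML tool)."""
    def _replace(kind: str):
        def inner(match: re.Match) -> str:
            token = f"<{kind}:{secrets.token_hex(4)}>"
            TOKEN_VAULT[token] = match.group(0)  # original value stays in the vault
            return token
        return inner

    for kind, pattern in PII_PATTERNS.items():
        text = pattern.sub(_replace(kind), text)
    return text


if __name__ == "__main__":
    record = "Customer John Doe, SSN 123-45-6789, card 4111 1111 1111 1111, john@example.com"
    print(tokenize_pii(record))
    # The tokenized record stays usable for analysis, while the raw PII
    # is recoverable only through the vault.
```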

Info-Tech’s blueprint stresses the importance of securing PII collection at every step, including within bank-controlled environments and the broader public sector. In the blueprint, the firm outlines the following key considerations for IT leaders in the banking sector:

  • Internal Threats: The primary risk to a bank’s PII increasingly comes from internal sources. While banks historically focused on external threats, the advent of AI and ML has reversed the risk environment.
  • Zero-Trust Approach: Banks should adopt a zero-trust approach to protecting PII, assuming that no user or system can be implicitly trusted with access to resources. This approach ensures that PII remains private even if someone gains access and uploads data into an AI/ML tool (see the sketch after this list).
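As a hedged illustration of the zero-trust point, the sketch below treats every outbound AI/ML request as untrusted by default: the destination is checked against an allow-list and the prompt is redacted before it leaves the bank-controlled environment, so a permissive user action alone cannot expose raw PII. The submit_to_ai_tool function, the allow-list, and the audit placeholder are hypothetical and assume the tokenize_pii helper from the previous example; none of this is Info-Tech’s own methodology.

```python
# Minimal zero-trust gateway sketch for AI/ML requests, assuming the
# tokenize_pii helper defined in the earlier example is importable here.

ALLOWED_AI_ENDPOINTS = {"https://internal-llm.bank.example/v1/chat"}  # assumed allow-list


def submit_to_ai_tool(endpoint: str, prompt: str, user_id: str) -> str:
    """Treat every request as untrusted: verify the destination, then
    strip PII regardless of who the user is or what access they hold."""
    if endpoint not in ALLOWED_AI_ENDPOINTS:
        raise PermissionError(f"{endpoint} is not an approved AI/ML tool")

    sanitized = tokenize_pii(prompt)  # PII never leaves in clear text
    audit_log(user_id, endpoint, redacted=prompt != sanitized)
    return sanitized  # forwarded to the AI tool in place of the raw prompt


def audit_log(user_id: str, endpoint: str, redacted: bool) -> None:
    # Placeholder audit trail; a real deployment would write to a SIEM.
    print(f"user={user_id} endpoint={endpoint} pii_redacted={redacted}")
```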

Deploying critical strategies to safeguard PII is important, especially amid the adoption of AI and ML technologies. Despite their benefits, these technologies also introduce complex challenges, notably in data management and control. The research highlights concerns around the ability to permanently retract or delete data within AI or ML systems, which could potentially lead to unintended data exposures. As a result, the firm advises banks to act promptly to implement strategies that protect PII effectively.

For exclusive and timely commentary from David Tomljenovic, an expert in the financial sector, and access to the complete Protecting PII When Using AI in Banks blueprint, please contact [email protected].

About Info-Tech Research Group

Info-Tech Research Group is one of the world’s leading information technology research and advisory firms, proudly serving over 30,000 professionals. The company produces unbiased and highly relevant research to help CIOs and IT leaders make strategic, timely, and well-informed decisions. For more than 25 years, Info-Tech has partnered closely with IT teams to provide them with everything they need, from actionable tools to analyst guidance, ensuring they deliver measurable results for their organizations.

Media professionals can register for unrestricted access to research across IT, HR, and software and hundreds of industry analysts through the firm’s Media Insiders program. To gain access, contact [email protected].

For information about Info-Tech Research Group or to access the latest research, visit infotech.com and connect via LinkedIn and X.

SOURCE Info-Tech Research Group




