Strider Technologies has raised $55 million in a Series C funding round to strengthen its AI capabilities and fuel global expansion. The funds will enhance the company's AI-driven insights, support its work with government agencies, and drive international growth in Europe and Asia.
The Irish data regulator launched an investigation to determine whether Google complied with European privacy law when developing its PaLM 2 artificial intelligence model. Google launched the multilingual generative AI model last year.
Welcome to Information Security Media Group's Black Hat and DEF CON 2024 Compendium, featuring the latest insights from the industry's top cybersecurity researchers and ethical hackers, as well as perspectives from CEOs, CISOs and government officials on the latest trends in cybersecurity and AI.
The U.S. federal government is preparing to collect reports from foundational artificial intelligence model developers, including details about their cybersecurity defenses and red-teaming efforts. The Department of Commerce said it wants input on how the data should be safely collected and stored.
The underground market for illicit large language models is a lucrative one, said academic researchers who called for better safeguards against artificial intelligence misuse. "This laissez-faire approach essentially provides a fertile ground for miscreants to misuse the LLMs."
A three-month-old startup promising safe artificial intelligence raised $1 billion in an all-cash seed funding round. Co-founded by former OpenAI Chief Scientist Ilya Sutskever, Safe Superintelligence will reportedly use the funds to acquire computing power.
The Dutch data regulator is the latest agency to fine artificial intelligence company Clearview AI over its facial data harvesting and other violations of GDPR privacy rules, joining regulatory agencies in France, Italy, Greece and the United Kingdom.
While the criminals may have an advantage in the AI race, banks and other financial services firms are responding with heightened awareness and vigilance, and a growing number of organizations are exploring AI tools to improve fraud detection and response to AI-driven scams.
Unifying fragmented network security technology under a single platform allows for consistent policy application across on-premises, cloud and hybrid environments, said Palo Alto Networks' Anand Oswal. Having a consistent policy framework simplifies management and improves security outcomes.
HackerOne has tapped F5's longtime product leader as its next chief executive to continue expanding its portfolio beyond operating vulnerability disclosure programs. The firm tasked Kara Sprague with building on existing growth in areas including AI red teaming and penetration testing as a service.
AI companies OpenAI and Anthropic made a deal with a U.S. federal body to provide early access to major models for safety evaluations. The agreements "are an important milestone as we work to help responsibly steward the future of AI," said U.S. AI Safety Institute Director Elizabeth Kelly.
California state lawmakers on Wednesday handed off a bill establishing first-in-the-nation safety standards for advanced artificial intelligence models to their Senate counterparts after weathering opposition from the tech industry and high-profile Democratic politicians.
ISMG's Virtual AI Summit brought together cybersecurity leaders to explore the intersection of AI and security. Discussions ranged from using AI for defense to privacy considerations and regulatory frameworks and provided organizations with valuable insights for navigating the AI landscape.
Cisco announced its intent to acquire Robust Intelligence to fortify the security of AI applications. With this acquisition, Cisco aims to address AI-related risks, incorporating advanced protection to guard against threats such as jailbreaking, data poisoning and unintended model outcomes.
Microsoft says it fixed a security flaw in artificial intelligence chatbot Copilot that enabled attackers to steal multifactor authentication codes using a prompt injection attack. Security researcher Johann Rehberger said he discovered a way to invisibly force Copilot to send data.