Taiwan bans DeepSeek AI and Meta warns of insider threats

Taiwan bans China's DeepSeek chatbot

Taiwan has officially imposed a ban on the use of DeepSeek, an AI-powered chatbot developed by a Chinese startup, within government organizations and entities responsible for critical infrastructure. However, the restriction does not extend to private businesses or individuals operating in Taiwan (officially the Republic of China).

The decision was formally announced by Taiwan’s Ministry of Digital Affairs, citing concerns over potential data leakage and cybersecurity risks. The ministry highlighted the risks associated with transmitting sensitive information through AI platforms, particularly those developed in China, due to the possibility of unauthorized data access and misuse.

DeepSeek recently suffered a major cybersecurity incident when it was targeted by a Distributed Denial-of-Service (DDoS) attack on January 20th, which led to service disruptions for several hours. Following this, the company also experienced a data breach, raising alarm over its security protocols. Additionally, reports indicated that DeepSeek’s servers received a surge of fake web traffic originating from multiple foreign locations, including the United States, Australia, the United Kingdom, and Canada.

Global concerns about AI-driven data security have been increasing, with many nations questioning how AI firms handle user-generated data. One of the major concerns surrounding DeepSeek is that the company does not store user data locally but transmits it to servers abroad, further escalating fears of potential misuse or unauthorized access. In response, several countries have advised their citizens against using DeepSeek's services.

Taiwan’s decision to restrict the chatbot’s use in government entities aligns with similar moves made by Italy and the broader European Union, which have also taken precautions regarding AI platforms with uncertain data security frameworks.

Meta issues warning over insider threats

Meta, the parent company of Facebook, has issued a strong internal warning to its employees regarding data leaks, reinforcing its commitment to safeguarding sensitive company information. The company has made it clear that any employee found guilty of leaking proprietary information will face immediate termination.

Guy Rosen, Meta’s Chief Information Security Officer (CISO), addressed the issue in an internal memo, emphasizing that data security remains a top priority for the tech giant. With AI innovation being a key focus for Meta in the coming years, protecting research and development (R&D) efforts has become crucial. The memo underlined the company’s determination to maintain its competitive edge in AI science by preventing internal security breaches.

Insider threats have always been a significant concern for corporations, regardless of their size. Employees with access to sensitive information may be tempted to engage in unethical activities for financial gain, personal revenge, or corporate espionage. These actions can lead to serious consequences, including the theft of intellectual property, the creation of security vulnerabilities, and significant reputational damage.

As Meta continues to expand its AI initiatives, the company is expected to implement stricter security measures to prevent unauthorized data disclosures. The warning serves as a firm reminder that protecting sensitive information is not only a business priority but also a fundamental responsibility of all employees.

By taking a hard-line stance against insider threats, Meta aims to strengthen its internal security framework and ensure that its cutting-edge developments in AI remain safeguarded against both internal and external risks.
