Seagate’s Data Pulse study identified several challenges that organisations face on their AI journey, such as the lack of a clear strategy, inadequate infrastructure or budgets, and a shortage of appropriate talent. To maximise AI’s potential, organisations must overcome these challenges as they adopt the technology.

Strong data security is one such critical element. According to the study, 94% of organisations agree that data security is critical for AI. Yet alarmingly, more than one in ten do not regularly review and update their data security strategy.

Data has become the new currency, but without security behind it, incidents such as data breaches can put companies out of business or put someone’s life at risk.

To help organisations catch up and address common questions around this aspect of AI, here are some insights and useful tips from our online panel of experts:


  1. In what ways will AI necessitate a change in my organisation’s approach to security?

BS Teh, Seagate

Data usage is increasingly being analysed by its level of criticality, as indicated by factors such as the severity of consequences should the data become unavailable. For example, data for a medical operation would be considered more consequential than a streaming television show.

By 2025, almost 90% of all data created in the global datasphere will require some level of security, but less than half will be secured, according to the Seagate-sponsored IDC whitepaper ‘Data Age 2025’. Nearly 20% of all data will be critical to our lives, of which 10% will be hypercritical.

This means that as our lives become more dependent on and intertwined with data, breaches in such life-critical data can have serious consequences – putting companies out of business or someone’s life at risk.

This criticality also means that data security must be the entire organisation’s responsibility, and not just the concern of IT and security professionals.

Yet although businesses seem to agree on the importance of data security, there is still a gap in secure practices. More than one in ten organisations still do not regularly review and update their data security strategy, according to our Data Pulse study.

In the age of AI, data is power. It’s clear that organisations must pay more attention to data security as the implementation of AI technology grows, to ensure a safe and seamless flow of information. A solid data infrastructure will be a strong foundation for their AI journey, not just to handle the exponential growth in data volumes, but also to ensure data is always protected from unauthorised access.

Akash Bhatia, Infinite Analytics

Given that the bedrock of AI is data – reams and reams of data – data security becomes paramount. With that in mind, you need to start with securing the source of data.

If there are multiple sources of data, each source will have its unique challenges pertaining to security. For example, if one of the sources is data from a retailer’s point of sale (POS), how do you ensure that the POS system does not have a virus or a Trojan in place? Or if the data source is from social media, how does one ensure that all the rules of data security have been adhered to, to prevent a ‘Cambridge Analytica’ incident? These could be extreme examples, but the truth is data breaches could impact any organisation.

The biggest effort, therefore, is for the organisation – whether it uses cloud or on-premise infrastructure – to adopt a “data first” strategy and secure that data. Pretty much every organisation will have to get there sooner rather than later.

Andrew Burgess, APD

Assuming organisations will increasingly rely on external cloud services to handle the scale of data and processing required to build AI models, the starting point will be to understand the at-rest and in-transit security capabilities of the service providers.

As more of these providers prove that they are “safe” and natively compliant (e.g. at the Security Operations Centre level), responsibility for compliance and best practice will shift to the organisation and its people, who must have expert knowledge of operating in these environments.

We could see the emergence of an entirely new type of man-in-the-middle attack – one that covertly alters the data – given that automated events and actions might become solely reliant on decisions derived from an AI model’s output.

Imagine if an AI model is predicting demand and systems are automatically scaling up and down to meet these predictions in demand. As an example, think of power generators which might be reacting to supply side or demand side signals from a model triggering actions locally that have network-wide effects.

If an attacker could impersonate the AI model’s results with “fake” outputs, the machine could be manipulated into making automated decisions with disastrous consequences, e.g. incorrectly responding to what it reads as a fluctuation in electrical current. Even a less serious incident could still send out confusing communications and weaken trust in institutions.
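One mitigation for this kind of output-spoofing attack is for the model service to authenticate its predictions, so downstream automation refuses messages that fail verification. Below is a minimal sketch using an HMAC over the prediction payload with a shared secret; the function names, message format, and the demand-forecast payload are illustrative, not taken from any particular system:

```python
import hashlib
import hmac
import json

# Illustrative shared secret, provisioned to signer and verifier out of band.
SECRET = b"shared-secret-provisioned-out-of-band"

def sign_prediction(payload: dict) -> dict:
    """Attach an authentication tag to a model output."""
    # Serialise deterministically so signer and verifier hash identical bytes.
    body = json.dumps(payload, sort_keys=True).encode()
    tag = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return {"payload": payload, "tag": tag}

def verify_prediction(message: dict) -> bool:
    """Return True only if the payload has not been altered in transit."""
    body = json.dumps(message["payload"], sort_keys=True).encode()
    expected = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    # compare_digest resists timing attacks.
    return hmac.compare_digest(expected, message["tag"])

signed = sign_prediction({"demand_mw": 412.5, "window": "2024-01-01T10:00Z"})
assert verify_prediction(signed)

# A man-in-the-middle who alters the payload invalidates the tag.
signed["payload"]["demand_mw"] = 999.0
assert not verify_prediction(signed)
```

A production system would also need replay protection (timestamps or nonces) and proper key management, but the core idea is that automated consumers should never act on an unauthenticated model output.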

An example of the latter happened in Hawaii, when a statewide inbound-missile alert was triggered, causing panic due to inadequate supervision of the automated systems and a poor understanding of how processes were connected together.


  2. What are the top data security risks/challenges for organisations adopting AI in a cloud environment? What can organisations do to address them?

Johnny Chou, Viscovery

One of the key security challenges is the risk of data being stolen during the process of uploading and downloading data for AI interpretation.

To minimise this risk, organisations should consider data encryption and adopt API authentication mechanisms to bolster their data security. We also encourage conducting regular security audits.
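The API authentication mechanism Chou mentions can take many forms; one common pattern is request signing, where the client computes an HMAC over the request details and a timestamp so the server can reject forged or replayed uploads. A hedged sketch follows – the header names, key values, and clock-skew window are illustrative assumptions, not any specific provider’s API:

```python
import hashlib
import hmac
import time

API_KEY = "client-42"            # illustrative client identifier
API_SECRET = b"per-client-secret"  # illustrative secret, shared out of band

def sign_request(method, path, body, ts=None):
    """Build auth headers: HMAC over method, path, body hash and timestamp."""
    ts = int(time.time()) if ts is None else ts
    body_hash = hashlib.sha256(body).hexdigest()
    msg = f"{method}\n{path}\n{body_hash}\n{ts}".encode()
    tag = hmac.new(API_SECRET, msg, hashlib.sha256).hexdigest()
    return {"X-Api-Key": API_KEY, "X-Timestamp": str(ts), "X-Signature": tag}

def verify_request(method, path, body, headers, max_skew=300):
    """Server side: recompute the signature and reject stale requests."""
    ts = int(headers["X-Timestamp"])
    if abs(time.time() - ts) > max_skew:  # limits replay of captured requests
        return False
    expected = sign_request(method, path, body, ts)["X-Signature"]
    return hmac.compare_digest(expected, headers["X-Signature"])

headers = sign_request("PUT", "/datasets/train.csv", b"col1,col2\n1,2\n")
assert verify_request("PUT", "/datasets/train.csv", b"col1,col2\n1,2\n", headers)
```

Transport-level encryption (TLS) would still be used underneath; the signature adds authentication and tamper detection on top of it.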

Another significant risk occurs when AI is used on a cloud platform across the organisation to support decision-making or improve operational processes. If an abnormality occurs on the cloud platform, services are likely to be affected at large scale, negatively impacting the organisation’s day-to-day operations.

To minimise potential data failure, we recommend partitioning the data by regions and storing the data in a distributed fashion.
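The partitioning Chou recommends can be sketched simply: records are spread across regional stores by hashing their keys, while a residency override keeps regulated data in its home region (this also anticipates the data-localisation point below). The region names and storage dictionaries here are stand-ins for real regional storage services:

```python
import hashlib

REGIONS = ["ap-southeast", "eu-west", "us-east"]  # illustrative region names
stores = {r: {} for r in REGIONS}                 # stand-ins for per-region storage

def region_for(record_id, home_region=None):
    # Residency rules take precedence: regulated data stays in its home region.
    if home_region:
        return home_region
    # Otherwise spread records deterministically across regions by key hash,
    # so a failure in one region affects only a slice of the data.
    digest = int(hashlib.sha256(record_id.encode()).hexdigest(), 16)
    return REGIONS[digest % len(REGIONS)]

def put(record_id, value, home_region=None):
    stores[region_for(record_id, home_region)][record_id] = value

put("order-1001", {"amount": 50})                                  # hashed placement
put("sg-bank-txn-7", {"amount": 9000}, home_region="ap-southeast")  # pinned placement
```

A real deployment would add replication within each partition; the point of the sketch is only that placement is deterministic and auditable.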

The third security risk during AI implementation is that data may be stored overseas – especially relevant in this digital age without borders.

So when deploying AI on a public cloud platform, organisations should consider the question of “where all their data are stored”. This is particularly important for industries such as financial services, where national regulations may necessitate that data is stored locally rather than overseas.


  3. When operating an AI system, how do I balance the need to have a responsive and flexible system with the vital need to keep the entire organisation’s data secure?

Akash Bhatia, Infinite Analytics

I don’t think there is an either/or relationship here. Speed and flexibility do not mean that safety is compromised.

Data integrity has to be paramount. If that impacts speed, then adding more servers or VMs to restore performance should be explored.


  4. Operating an AI system also means having to manage a lot more data. What are some best practices that you can share to keep organisational data safe?

BS Teh, Seagate

From an organisational standpoint, it’s important to host regular meetings with the IT management team to determine possible areas of concern or vulnerability.

There needs to be a regular inspection of data assets to ensure that there is adequate protection for data. The company’s IT and data security policy – BYOD, network security, internet access, remote access – needs to be reviewed and updated on a regular basis.

It is also important to ensure storage solutions are equipped with enterprise-level security features, such as Self-Encrypting Drives, Instant Secure Erase, and Secure Download & Diagnostics.

Lastly, make it a point to update software and hardware at regular intervals.

From an infrastructure standpoint, it is clear that aspects such as encryption, sophisticated access control, and advanced data protection strategies are key to ensuring data is always available and protected from unauthorised access.

As a leader in data storage and a founding member of the Trusted Computing Group (TCG), Seagate has been a pioneer in securing data at rest and remains a driving force in the creation and adoption of data security standards, offering the industry’s widest selection of standards-based self-encrypting storage.

By combining Seagate Secure with management software, enterprises and end users can implement a full data-at-rest encryption solution with multiple capabilities.

In addition, cloud storage remains an attractive solution for many organisations due to the scalability, cost effectiveness and convenience it offers.

If your business is using a cloud or hybrid cloud storage model, you need to implement data security solutions just as you would for your on-premise storage.

While the security controls you choose will depend partly on your data usage and business needs, considerations such as end-to-end encryption, firewalls, and operational controls should be applied to your cloud security solution.

Andrew Burgess, APD

Large datasets, especially in the raw form required to build and tune AI models, are inherently safer than smaller datasets (as long as certain information is obscured), because specialist skills and tools are needed to infer how the data relate to an individual or to business secrets.

Perhaps the bigger problem would be compliance with data governance rules such as the EU’s GDPR.

It might be difficult to know where data derivatives, created through data and model processing of raw datasets, are residing and what controls surround these data.

Although the primary data may be safe, the derived data product may be inadvertently vulnerable or in breach of compliance requirements.

Therefore, it may be a worthwhile exercise for organisations to create policies and practices (which can be subject to audit), instead of having to deal with these issues at a time of crisis brought on by a breach.
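One such auditable practice is a lineage registry: every derived dataset is recorded alongside its sources, and inherits their compliance classification until a review proves otherwise. The sketch below is a hypothetical illustration of that policy idea, not any particular governance tool; the field names and the “personal data” flag are assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class DatasetRecord:
    name: str
    location: str                    # where the data resides (for residency audits)
    contains_personal_data: bool
    derived_from: list = field(default_factory=list)

registry = {}

def register(rec):
    # A derivative of personal data inherits that classification until a
    # review proves it has been fully anonymised - this is the conservative
    # default that keeps derived data products inside the compliance net.
    if any(registry[src].contains_personal_data for src in rec.derived_from):
        rec.contains_personal_data = True
    registry[rec.name] = rec

def audit_personal_data():
    """List every dataset, raw or derived, still subject to GDPR-style controls."""
    return [r.name for r in registry.values() if r.contains_personal_data]

register(DatasetRecord("raw_customers", "eu-dc-1", True))
register(DatasetRecord("churn_features", "cloud-bucket-a", False,
                       derived_from=["raw_customers"]))
```

Here `churn_features` is flagged automatically because it was derived from `raw_customers`, so an auditor can find it without knowing the model pipeline that produced it.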

Johnny Chou, Viscovery

First of all, a well-established AI system requires continuous data collection, training, and optimisation, and this data must be classified and managed. Training data does not have to remain on the system once training is done; it can be stored separately to reduce the risk of it being hacked.

Secondly, when the implementation of your AI system is outsourced, the data needed for model training is usually provided to the vendor. In this case, the scope of and restrictions on data use must be clearly stated in IT policy and in the contract, to prevent data leaks or the removal of important information unrelated to AI.

Last but not least, employees must be aware of the security concerns that come with AI implementation, so that they avoid handing unencrypted or otherwise unprotected data to external staff.
