No matter the industry, organizations are managing huge amounts of data: customer data, financial data, sales and reference figures; the list goes on. Data is among the most valuable assets a company owns, and keeping it secure is the responsibility of the entire organization, from the IT manager to individual employees.
The rapid rise of generative AI tools, however, demands an even greater focus on security and data protection. For organizations, adopting generative AI is no longer a question of if, but when; it is a must in order to stay competitive and innovative.
Throughout my career, I’ve experienced the impact of many new trends and technologies firsthand. The influx of AI is different because, for companies like Smartsheet, it requires a two-sided approach: as a customer of vendors incorporating AI into the services we use, and as a company building and launching AI capabilities into our own product.
To keep your organization secure in the age of generative AI, I recommend CISOs stay focused on three areas:
Transparency
One of my first questions when talking to vendors is about their AI system transparency. How do they use public models, and how do they protect data? A vendor should be well prepared to disclose how your data is being protected from commingling with that of others.
They should be clear about how they’re training the AI capabilities in their products, and about how and when they’re using them with customers. If you as a customer don’t feel that your concerns or feedback are being taken seriously, it could be a sign your security isn’t being taken seriously either.
If you’re a security leader innovating with AI, transparency should be fundamental to your responsible AI principles. Publicly share those principles and document how your AI systems work, just as you would expect from a vendor. An often-missed part of this is also acknowledging how you anticipate things might change in the future. AI will inevitably continue to evolve and improve, so CISOs should proactively share how they expect this to change their use of AI and the steps they will take to further protect customer data.
Partnership
To build and innovate with AI, you often need to rely on multiple providers who have done the heavy, expensive lift of developing AI systems. When working with these providers, customers should never have to worry that something is being hidden from them; in return, providers should strive to be proactive and upfront.
Finding a trusted partner goes beyond contracts. The right partner will work to deeply understand and meet your needs. Working with partners you trust means you can focus on what AI-powered technologies can do to help drive value for your business.
For example, in my current role, my team evaluated and selected a few partners so we could build our AI on the models we believe are the most secure, responsible, and effective. Building a native AI solution can be time-consuming and expensive, and it may not meet your security requirements, so leveraging a partner with AI expertise can shorten time to value for the business while maintaining the data protections your organization requires.
By working with trusted partners, CISOs and security teams can not only deliver innovative AI solutions to customers more quickly, but also keep pace with the rapid, iterative development of AI technologies and adapt to evolving data protection needs.
Education
To keep your organization secure, it’s crucial that all employees understand the importance of AI security and the risks associated with the technology. This means ongoing training that helps employees recognize and report new security threats, along with coaching on appropriate uses of AI both in the workplace and in their personal lives.
Phishing emails are a good example of a common threat employees face on a weekly basis. A standard recommendation for spotting a phishing email used to be to look for typos. Now, with AI tools so easily available, bad actors have upped their game: we are seeing fewer of the clear, obvious signs we previously trained employees to look for, and more sophisticated schemes.
Ongoing training for something as seemingly simple as spotting phishing emails has to evolve as generative AI reshapes the security landscape. Leaders can take it a step further and run a series of simulated phishing attempts that put employee knowledge to the test as new tactics emerge.
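To make the simulated-phishing idea concrete, here is a minimal Python sketch of the tracking side of such a campaign: each employee gets a unique token embedded in their training link, so clicks can be attributed and follow-up coaching targeted. All names, addresses, and URLs here are illustrative assumptions, not a real tool or vendor API.

```python
import secrets

def build_campaign(employees, landing_url):
    """Assign each employee a unique, unguessable token and a personalized link."""
    campaign = {}  # token -> employee email
    links = {}     # employee email -> personalized training link
    for email in employees:
        token = secrets.token_urlsafe(8)
        campaign[token] = email
        links[email] = f"{landing_url}?t={token}"
    return campaign, links

def record_click(campaign, clicked, token):
    """Mark the employee behind `token` as having clicked the simulated lure."""
    if token in campaign:
        clicked.add(campaign[token])

# Hypothetical roster and landing page for the training exercise:
employees = ["alice@example.com", "bob@example.com"]
campaign, links = build_campaign(employees, "https://training.example.com/phish-sim")

clicked = set()
# Simulate one employee clicking their personalized link:
alice_token = links["alice@example.com"].split("t=")[1]
record_click(campaign, clicked, alice_token)

print(sorted(clicked))  # employees who clicked and need follow-up coaching
```

In practice the sending, landing page, and reporting would be handled by a dedicated security-awareness platform; the point of the sketch is simply that per-recipient tokens let you measure who fell for which tactic without singling anyone out publicly.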
Keeping your organization secure in the age of generative AI is no easy task. Threats will become increasingly sophisticated as the technology does. But the good news is, no single company is facing these threats in a vacuum.
By working together, knowledge sharing, and focusing on transparency, partnership, and education, CISOs can make huge strides in the security of our data, our customers, and our communities.
About the Author
Chris Peake is the Chief Information Security Officer (CISO) and Senior Vice President of Security at Smartsheet. Since joining in September 2020, he has been responsible for leading the continuous improvement of the security program to better protect customers and the company in an ever-changing cyber environment, with a focus on customer enablement and a passion for building great teams. Chris holds a PhD in cloud security and trust, and has over 20 years of experience in cybersecurity, during which time he has supported organizations like NASA, DARPA, the Department of Defense, and ServiceNow. He enjoys biking, boating, and cheering on Auburn football.
Sign up for the free insideAI News newsletter.