Skip to main content

Author: Max Graupner, VP, Security & Audits

Cyber attacks on the supply chain, especially transportation, are a common occurrence and will increase in 2025. As a recent post in DarkReading outlines, businesses, especially small businesses, started to look to AI as the silver bullet to improve efficiencies and defend against ever-changing attacks. While AI plays an important role, it cannot (yet) by itself manage all cyber risks. 

To be better equipped, shippers should focus on key areas of the transportation partner’s cybersecurity program. While Standardized Information Gathering (SIG) security questionnaires can be a great start, they often miss important aspects. Below are some key areas to focus on when selecting or assessing your transportation partner. The outlined points can help assess cyber risk, especially when the transportation partner leverages AI. 

How do you measure the success of AI in your cyber program? 

While many security and transportation products and services have interjected AI into the mix, it sometimes is not clear what the benefits are. Often, the mere mention of AI is to imply that the product is “good” or “enhanced.” But AI comes with plenty of flaws and new risks in itself. If it sounds too good to be true, it probably is. According to IBM’s 2024 cost of a data breach report, the biggest gains of AI we saw in security were in threat detection and incident response. Apply that to transportation, specifically car transportation, where we have seen improvements in fraud detection, behavioral analysis, and order issues identification and resolution. 

Are you providing training to your employees, and are you measuring the success of the training program? 

A robust training program is a must nowadays. Understanding how your transportation partners deploy the training program, the contents of the curriculum, and how they measure the success of the program is becoming increasingly important. A single “check” for compliance will no longer suffice. The threat landscape is changing fast, and with AI, we are not only increasing development speed but also the need to keep up with how to test, secure, and fix new technologies and products. All that starts with knowledge and empowering employees with the right skill set to get it done. 

What methods do you use to test the security of AI in your product? 

As transportation products and services infuse more AI, they should have a framework for assessing and testing AI. A transportation partner should be able to articulate how they test AI in their products. As mentioned earlier, AI brings benefits and speed but also comes with its own risks. OWASP, a leading nonprofit foundation to help secure software, has a project specific to large language models (LLMs) which aims to “educate developers, designers, architects, managers, and organizations about the potential security risks when deploying and managing Large Language Models.” Other organizations such as STAR (Standards for Technology in Automotive Retail) have started to issue guidance on assessing cyber risks related to dealership and vendor compliance. 

By focusing on the aspects of measuring AI’s success, assessing and improving employee training, and testing AI in products and services, shippers can gain assurance beyond the standard questionnaires to understand what the risks, especially around artificial intelligence, are with their transportation partners.

A Trusted Transportation Partner in RunBuggy

RunBuggy has a dedicated, in-house Security and Data Science team that works together to set the AI benchmark in automotive shipping. By leveraging industry-leading frameworks such as OWASP Top 10 for Large Language Model Applications and collaborating with industry standard setters like STAR, RunBuggy is dedicated to deploying AI safely and securely. 

Sources: 

Dark Reading | IBM | OWASP | STAR