Cybersecurity experts with FBI and Microsoft experience visit DTX to share thoughts on tackling AI crime – and whether the tech matches the hype
You need 11 Lionel Messis on your team to stay secure in a world of AI-powered attackers – or so experts with experience at the FBI and Microsoft have told a massive cybersecurity conference in Manchester.
DTX + UCX Manchester, which bills itself as “the North’s leading business transformation event”, has taken over Manchester Central this Wednesday and Thursday as part of Manchester Tech Week. Speakers from the UK and beyond have addressed tech-focused delegates on subjects ranging from cybersecurity to automation and AI.
The keynote speakers on day one of DTX talked about how artificial intelligence (AI) and machine learning (ML) could help to stop cyber crime. That’s particularly timely given the reported development of tools such as Claude Mythos, the prototype tool that developer Anthropic claims can outperform humans at some cybersecurity tasks and even detect decades-old bugs in key software systems.
Howard Marshall, former deputy assistant director at the FBI’s cyber division, introduced himself by recounting his decades at the FBI before joining Accenture to work in cyber threat intelligence and cybersecurity. He said it had taken him time to adjust to life away from law enforcement – but added that he had learned “you don’t have to have a badge and a gun to help secure the world. The data we have access to is endless.”
Howard observed there had been “a lot of noise” around AI and what it can actually do, adding: “I’m not sure the general public… has been able to bifurcate what is the noisy part and what is the practical part.”
Kelly Bissell had decades of tech expertise at companies including Accenture and Deloitte before joining Microsoft as corporate vice president for tackling fraud and product abuse. He said he had spent 30 years in cybersecurity but that “the last four years at Microsoft had dramatically changed my thinking” on how to protect companies and the software market more widely from attacks.
Kelly shared some practical examples of how AI had changed in his time at Microsoft. He said he had been part of a “global ghost team” tracking fraud, meaning it had to explore 16 billion transactions a month. His team also had to help prevent misuse of Microsoft tools, such as by detecting deepfake calls on Teams or crypto mining activities on the cloud.
That means, he said, that he and his team had to be “in the depths of the engineering” to find cyber crime faster. Before AI, that meant “a bunch of scripts”, different software and groups of contractors. But after AI, he said, things were “totally different” and more connected, with fewer software developers able to do more work more efficiently.
The lesson, he said, was that people had to use the right AI system for the right job – he later explained that might mean Claude for code and Azure for data science. He said Microsoft had gone from “nascent to extremely mature” in its AI use in four years. Howard asked whether there had been any internal resistance. Kelly said that while senior management, from CEO Satya Nadella down, had pushed AI, there had been some uncertainty among employees.
He said, for example: “A software developer didn’t want to change up his or her tools and what they did every day. It was a change management problem.”
Kelly’s answer to that problem was blunt. He said people were expected to use AI and that if they did not, they had to leave the business. He talked about one great technician who had failed to adapt, and said: “As soon as I fired him, everybody got religion”.
Unsurprisingly, Kelly said there was “a bit of fear” about the impact of AI. But he insisted the team had been more effective once AI use was expanded.
He said: “It changed the way we work and the excitement of what we could accomplish.”
Last year, he said, Microsoft thwarted $4bn of fraud “and we couldn’t have done that without AI and ML – applied AI and applied ML.”
Later, the pair talked about the skills people needed to adopt in a world of AI. Kelly said that people in cybersecurity used to be expected to possess a range of deep technical skills in many areas – “A unicorn superman sort of thing”.
But now AI means people don’t need to be PhD experts in technical fields. He said: “This is one of the few technologies that’s actually empowering”.
When it comes to tackling fraud, particularly AI-powered fraud, the skills required these days are different. Cybersecurity specialists need, Kelly said, to think like their adversaries.
“The need today is creativity and imagination,” he said, “and being able to think outside the box. If I were a bad actor, what would I do to disrupt this thing?
“You can’t just be a general IT person. You have to know how the company makes money.”
And, he said, you have to have an “insatiable desire” to learn about cybersecurity and new tools as they are launched. Howard agreed, saying that from a law enforcement perspective the FBI has manuals for everything, but that agents are also expected to use their imagination and creativity to solve cases.
Kelly said: “You can’t expect to win unless you have a great sparring partner”, and that cybersecurity pros had to have the imagination to “think like an attacker”.
Howard and Kelly agreed that while AI was a useful tool, it still had to be checked carefully – meaning humans still had to take the lead in cybersecurity. Howard pointed to instances in the US where lawyers had been caught out using AI, as their briefs included references to non-existent case law hallucinated by AI.
Kelly said people should analyse their AI model, not just accept its output, and that people should have a “professional scepticism” about those tools. People should also see the AI as a tool, and not be controlled by it – smiling at his accidental Microsoft pun, he said: “The human is the pilot and AI the copilot”.
And he added: “The worry is that people will acquiesce to what the tool says blindly, and I think that’s a mistake.”
Asked what’s next with AI, the pair reached for a football analogy – Howard said he’d been to watch Manchester United’s victory over Brentford on Monday. Kelly said that attackers and fraudsters were already using AI, so companies looking to protect themselves needed to adapt to that too.
He said: “If you haven’t embedded AI in your operations you’re in trouble. To go back to Manchester (United) … It’s like the other team has 11 players and you have three. You’re not going to win. Also, AI means all 11 are Messi players.”
Kelly talked about an example from his Microsoft days of attackers attempting “phishing” scams to trick people into revealing sensitive information. He said a new wave of attackers was able to use AI tools to scrape social media and target the phishing to individuals “at a scale of millions”.
He added: “If you want to combat them you’re going to have to use the same level of precision in your own operations.”
There was, he said, an “arms race” between attackers and defenders. And if you want to keep your business secure, he said, you should “put 11 Messis on your pitch”.