Tony Fyler, Author at TechHQ https://techhq.com/author/tonyfyler/

How to combat the security risk of sleeping data https://techhq.com/2023/12/how-do-we-protect-against-data-security-risk/ Fri, 29 Dec 2023


• Keeping unnecessary data can be a security risk for companies.
• A data security platform allows you to keep your eyes on your data.
• Not knowing what’s happened to any of your data is unlikely to be acceptable much longer.

We’ve been looking into the security risk of hoarding “sleeping data” – data that no longer has value for an organization – with Terry Ray, SVP at Imperva. It’s a chronically underexplored and misunderstood phenomenon, because it stems from a logical presumption that keeping everything is inherently safer than letting it go – and then maybe, down the line, needing it again.

In Part 1, Terry outlined the thinking behind organizations holding on to data they think they might need one day, but which then, no one looks at for years or even decades, likening it to the junk drawer we all have in our homes.

In Part 2, we took a look at the scale of the problem, via a UK Imperva study that showed practically one record lost from organizations for every human being in the country, each year, over three years (2019-22).

As the scale of the security risk of sleeping data was becoming clear, we challenged Terry with the obvious question every business would want answered: now that we understand the problem, what can we actually do about it?

The two-thirds issue.

TR:

I think of cybersecurity in general and security risk in particular as a kind of soccer pitch.

THQ:

*Braces for sportsball talk.*

TR:

In soccer, your goalie is your last line of defense, the last security before (if you will) the attacker gets to the goal.

That is your data security.

But your goalie can only cover one-third of the goal. The other two-thirds? They can’t go there. Result?

THQ:

*Blinks in techie.*

TR:

The result is you’re not going to win the game unless you tackle your security risk.

And that’s what we’re talking about here. When we say we’re only going to protect data that’s regulated or data that’s important to the business, that’s great. That’s one-third of the job. We ignore the other two-thirds of the job at our peril, that’s really what it comes down to.

THQ:

So, picking ourselves up from the sportsball references, how do we protect the other two-thirds of our data?

TR:

Well, I’m not going to be the one that sits here and says that applying data security controls is inexpensive. It’s not. But if a business has said “it’s really important that I collect all of this data,” then part of the cost of that decision is the need to protect that data. There’s a cost associated with protecting that data, and I think, unfortunately, for a lot of organizations, when they began collecting this data, that might not have been part of the equation.

But where we are today, that data is in demand by somebody. And that becomes part of the cost of storing that data and certainly part of the cost of collecting any additional data.

The point of a data security platform.

THQ:

So – show me the money?

TR:

Pretty much. Organizations need to readjust their cost model of holding on to this data, because they now have to de-risk having that data in their organization.

Now you could just say, I’m going to de-risk it by deleting everything more than five years old. And going forward, I’m going to put controls on everything. Great, that’s fine, go and do that.

If you’re not going to do that, then you need to reassess your security strategy, because right now, you have a goalie that can only reach a third of the way across the goal.

THQ:

So how do they get a better goalie?

TR:

They need a data security platform. What that entails will depend on the organization itself. The savvy organizations that have gotten this over the last 10 to 20 years have said, “I need to be able to determine where that data is. And what type of data I have out there.”

To mitigate a data security risk, you need a data security platform.

Got a data problem? Get a data security platform.

And if they don’t want to ask those two questions – and it’s totally fine not to – they at least need to be able to say “If anybody accesses that data, I want the ability to actively watch them do it.”

You need to know what data’s been touched, by whom, and to understand whether, say, Terry has ever touched that data before. Is it right that Terry’s touching that data, suddenly, after however long? Is that restricted data or sensitive data? And is that data that usually only apps touch, or does it make sense that Terry’s touching that data right now?

If you’re not actively watching the data, you cannot protect or classify anything.
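
(For the technically minded, here’s what “actively watching” might look like in practice. This is a minimal sketch built on our own assumptions – a toy log format and two illustrative rules – not Imperva’s product logic.)

```python
from collections import defaultdict

# Toy access history: dataset -> principals previously seen touching it.
history = defaultdict(set)
history["customer_records"] = {"billing-app"}  # normally touched only by an app

SENSITIVE = {"customer_records"}

def check_access(user: str, dataset: str) -> list[str]:
    """Flag first-time access, and human access to app-only sensitive data."""
    alerts = []
    if user not in history[dataset]:
        alerts.append(f"first touch: {user} has never accessed {dataset} before")
    if dataset in SENSITIVE and not user.endswith("-app"):
        alerts.append(f"human account {user} accessed app-only dataset {dataset}")
    history[dataset].add(user)
    return alerts

# Both rules fire - exactly the "is it right that Terry's touching that data,
# suddenly?" question posed above.
print(check_access("terry", "customer_records"))
```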

Old ideas done new.

Data security platforms have the ability to monitor all data in the cloud, if the organization has gone that way, or all data in on-premises Oracle or MSSQL systems and mainframes, if the organization has them.

The goal of this strategy is to do the same thing for data security that organizations have already done for endpoint security – watch all of the endpoints. It’s already been done for network security – watch all of the networks.

A data security platform is a repetition of best practice cybersecurity – you have to have eyes on something to protect it. So put your eyes on your data.

Organizations know very well how to do this for their regulated data, because they’ve been doing it for regulators for a long time. They simply need to duplicate that effort for all of the rest of their data and stop thinking that if they’ve covered all their regulated data that they’re done with data security.

Data visibility is key – and it means being able to answer all those questions about what’s happening to your data, and whether that feels right.

Beyond that, mitigation is a whole other ball game.

But at least if you can answer those questions about what’s happening to your data, you have the intelligence to start making informed decisions. Who’s accessing the data? Are they allowed to do so? If yes, fine; if not, how are they doing it? Does that indicate a weakness in the system somewhere? And so on.

If you can answer all those questions about all your data, not only can you meet all of your regulatory requirements, you also have that data for analytics, machine learning, or whatever tool you have – meaning you might be able to get use out of it with new technology and analytics that you didn’t have back when you started storing it.

“I don’t know” is unacceptable.

We’re coming to a point where “I don’t know” shouldn’t be an acceptable answer when it comes to a company’s stored data.

I run regular CISO roundtables, I get a group of ten CISOs together, and the first question I ask is “Raise your hand if you know who is responsible for data security in your organization.”

Half of the room doesn’t raise their hand.

And then we go around the room, finding out why they didn’t raise their hand. There’s always the diplomatic answer, which is “It’s everybody’s job. Data security is everybody’s job.” Fine. It is. Yes, it’s everybody’s job. But what I’m asking is whose phone rings when someone says we lost a million records because a bug bounty came in? Whose phone rings, and who says “What did they take? When did they take it? What controls do we have to make sure that we monitored them or it’s not going to happen again?”

And then they all raise their hand – because it’s their phone. “So now let’s talk about the controls and the strategy that you have in place…”

Everybody put your hands up.


That’s what organizations have to deal with every single day – knowing that there’ll be someone responsible for the security risk when the call comes, but not often having the factored-in budget or teams to respond with a pre-emptive data strategy.

If you have that strategy, that budget, and those teams in place to take data security seriously, you get CISOs who aren’t forced to make up their responses to crises when the crisis hits. Hopefully, they’ll have a data privacy officer to help protect their data, and respond in the event of that call coming in.

Whose head is on the line when the phone rings?

The scale of the sleeping data security risk https://techhq.com/2023/12/how-big-is-the-scale-of-security-risk-from-sleeping-data/ Thu, 21 Dec 2023


• Sleeping data can prove a significant security risk.
• Most companies, one way or another, let their sleeping data lie.
• 40% of the security risk to such data comes from insider threat.

In Part 1 of this article, we spoke to Terry Ray, SVP of Imperva, about the corporate habit of retaining every piece of data for the rest of time, on the principle that it would either be a security risk or a value risk to let any of it go.

That attitude, though, is clearly the result of poor data management strategy. If you don’t know what data you have, can’t ascribe it any inherent level of value, or don’t know who in your organization is responsible for any given piece of data, you strip the data of its corporate value. It becomes the equivalent of data fluff: something you feel you have to keep – potentially several times over – without any hope of ever using it again to drive value for the business.

There’s a clear link there between data that’s kept beyond its limit of corporate usefulness and unjustifiable cost – data storage is not, on the whole, a cheap proposition these days, and if you have no idea how important the data you’re storing is, the human temptation is to treat everything like it’s of maximum importance and pay the fees for long-term storage.

But Terry went beyond the cost implications. By keeping unclassified, unexamined, and potentially valueless data for too long, he said, companies were effectively paying to create their own future security risk.

We needed to know more about the mechanics of that.

THQ:

So, storing all data, unexamined and uncategorized – sure, we can see the cost implication of that. But where’s the security risk? It may be valueless, but doesn’t that mean it’s just null, rather than negative?

TR:

No.

THQ:

Oh.

TR:

Value’s not a monolithic thing, in terms of the data. There are certain pieces of information that will live on and are still valuable – to someone. Maybe not to the organization that first harvested it, but that’s not the point. There are different data buckets involved here – organizations should be saving, and securing, things like regulated data.

THQ:

Because there’s every chance those who regulate the data will at some point come knocking and need to see it?

TR:

Right. And that’s the point. If it’s regulated data, it should be secured, that’s just best practice – and it lowers the organization’s security risk. If you’re paying for security, you’re at least doing “more than nothing” to make sure your duty is done by the data. But like top level storage, there’s a cost implication to that. You do it because it’s best practice, and because you’re fairly sure someone will want to take a look at the data some day, so you have that duty of care to the data.

But who’s going to assume the same bottom line expense for unregulated data? How would you explain that to your board? “We also paid out a hefty sum to secure data that noooooooobody’s going to want to see, and that nobody’s looked at in years?”

THQ:

Ah.

TR:

“Ah” is right. There’s still data that’s unregulated but is critical to the functioning of the business. But if it’s critical, the chances are someone’s looking at it on a regular basis, so it identifies itself as that kind of data.

But then there’s the third kind of data. Data that’s not – at least any longer – vital to the running of the business.

That could be anything, depending on the nature of the business. Say you’re a mining company, and you’ve exhausted the potential of a particular drill site. All the data relating to that drill site gets put on the back burner, and as the business moves forward, is unlikely to be looked at on any regular basis.

But then maybe we come back to it 15, 20 years down the road, when there’s new technology that allows me to extract even more out of the site, and suddenly that data has organizational value again.

In the meantime though, it still has potential value to someone else – maybe government organizations, maybe higher-tech, smaller operations that can make use of it.

The common idea of data security risk is of a malicious individual or group.

When we think of data hackers, we tend to think of vastly sophisticated outside agents…
(Image generated by AI).

The security risk of “valueless” data.

Now, it’s important to say that these are not organizations for which it’s worth the time and the expense of stealing your historic data.

But if they were offered the data for free, or for a consideration that stopped them having to do, for instance, their own deep geological surveys, they’d probably jump at that prospect.

And that’s the security risk. Data that’s not vital to the everyday running of an organization has significantly reduced value to the organization. That means the organization is unlikely to pay out for the kind of security systems that protect its regulated, or its daily-use business data.

Which makes it an easy target for those who might be able to monetize that data for themselves.

And if data from within an organization gets out onto the web, then we’re off to the races. Funnily enough, data that’s not important to an organization on a daily basis becomes much more important the moment there’s a breach – because then the organization gets a breach notification, and suddenly it has a world of headache and hurt. A breach notification doesn’t say “only notify us about data that’s important to me or to you or to somebody else” – it requires notification about all the data the company holds. And that becomes a problem for the organization in a very big hurry.

Unless you’ve applied consistent controls to all your data, that’s a big security risk.

THQ:

We were going to ask what the levels of risk were, in terms of having valueless data hanging around, potentially for decades, but that makes sense – if you don’t know its potential value to others, you don’t have the justification to take to the board to secure it and spend the money on securing it.

TR:

Right. Also, of course, plenty of organizations out there are still working in large networks with all these sub-networks and data silos. There can’t be many organizations that would say “This network over here, it’s in the old building, and it’s just the IT department, we hide them away from everybody. Anyway, we’re a financial services organization, we’ve got the suits over here in the big, pretty building, but the old ugly building over here, let’s not put a firewall on that network. Let’s not worry about it because it’s the old network, so it doesn’t matter…”

But that’s what we’re talking about here. It’s an absurd idea on a human level, you’d never do it. But plenty of organizations do it on a data level. And that right there is your security risk.

The other thing we see is that sometimes, data’s stored in an old application, or behind an old application, or an old database. It’s there for a purpose, because there’s still an organization that needs access to the new data that’s being put into that system. But there are just a few people or a small department that uses it.

And all the old data is still sitting there in that system, maybe with older controls. But there aren’t that many people that know the security that protects it. So when new people come in, they have new ideas, and technology’s moved on, and so things get changed – and nobody bothers about the old data, because there’s no reason to disturb it.

But it’s still there.

It’s a security risk that they deem to be acceptable.

We did a study from 2019 to 2022, and we looked at 99,000 security breaches. Now, a breach can be a really small loss, a couple of records, an email sent to the wrong person. The ICO in the UK said “Hey, these are all the breaches that happened.” We went in and studied it, and the number of records that were taken in that three-year period… care to take a guess?

THQ:

Let’s not be unnecessarily silly.

TR:

Around 200 million records, taken. In the UK.

What’s the population of the UK?

THQ:

67.8 million in 2023.

TR:

So that’s at least twice as many, almost three times. Give or take three million.
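
(For the record, the arithmetic behind that exchange, using the figures quoted above:)

```python
# "Almost three times, give or take three million," checked.
records_lost = 200_000_000
uk_population = 67_800_000

print(records_lost / uk_population)      # ~2.95 - almost three times
print(3 * uk_population - records_lost)  # 3,400,000 - the "give or take"
```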

THQ:

Ach, what are 3 million data records lost between friends…

Sometimes, the cost of a security risk is not just the loss of data.

Sometimes, the cost of a data breach is not just the loss of the data.

The data threat is inside the building…

TR:

The point being that the scale of the security risk is almost three times as large as the population of the UK. Now, not all of those breaches are big and meaningful, and probably, as we’ve said, not all of them were deliberate.

But they don’t need to be.

Do you want to know the top industries for data breaches?

THQ:

With a certain amount of bracing for impact, sure, hit us.

TR:

Education is number one. And right below that is healthcare. And then you get to government. Finance is pretty low – which makes some sense, because it’s an industry that has the latest, most up-to-date data security.

THQ:

Meanwhile, chunks of the country’s critical national infrastructure are subject to this level of security risk.

TR:

And most people think the threat comes from outside – that’s the model we have in our heads, the attacker from without.

THQ:

We’re human beings, we love a good narrative of persecution from The Other. You may have noticed…

Inattentive Terry could be as much of a security risk.

We love Office Terry. We appreciate his beard. But sometimes, Office Terry is a security risk. There, we said it.

TR:

Ha. Yeah. But from this study, 40% of these breaches were insider threats. Insider threat does not mean Edward Snowden is inside your systems. It’s probably Terry from Accounts, he wasn’t really thinking and sent an email with an Excel spreadsheet to somebody he wasn’t supposed to. Then maybe Terry clicked on the wrong link that said “Free Oracle Training”… and it was ransomware.

There are a thousand different ways these breaches could happen. But the fact is, your firewall’s out front and a lot of the controls that organizations really rely on for the security of the infrastructure and the endpoints are pointing in the wrong direction.

And, more importantly, they’re not the right technology when it comes to protecting your data.

The real cost of data breaches.

In Part 3 of this article, we’ll take a look at what the right technology might be to protect your company from becoming one of the hundreds of millions of data breach statistics.

 

The dangers of valueless security risk https://techhq.com/2023/12/why-is-storing-data-indefinitely-a-security-risk/ Tue, 19 Dec 2023


• Storing endless amounts of valueless data is an unnecessary business expense.
• It’s also potentially a security risk, because valueless data tends not to be well protected against hackers.
• How can you mitigate the security risk of such endlessly stored, never-used data?

Our modern business world, more than that of any previous era, is data-driven. In many cases, data provides the real value proposition of a business – from vast social media platforms to competitive widget-making firms in the middle of nowhere. But where there is a vast amount of stored, historic data, there is also a security risk. Just because the company that initially collated the data for its own purposes may have finished wringing value from it, that doesn’t mean the data has actually lost its potential value to brokers and bad actors.

We spoke to Terry Ray, SVP at Imperva, about how much of a security risk was inherent in data thought of by companies as being “dead.” He defined it as being a “valueless security risk.” We wanted to know more.

THQ:

Define a valueless security risk for us? What do we actually mean by that?

TR:

It’s a security risk that comes from data that isn’t necessarily valueless to everybody, but that no longer has value to the collecting organization.

For example, my name, address and phone number are going to be as valuable to an organization today as they were five years ago, and as they will be five years from now – they’re ways the company has of contacting me, a once and future customer.

But where I shopped, the things I bought, even my credit card number from five years ago, things that I bought for other people, my healthcare information that was valuable while I was being billed, that’s all data that probably isn’t really all that valuable, until maybe I need it for some other procedure down the road.

So the point is, valueless data is only valueless relative to the people who have the data in their systems. But there’s a lot of it. Organizations have vast volumes of data that was used at one point, but they never really figured out who owns that data, how the data was really used, and whether or not they can in fact delete it.

It’s very rare for an organization to say “I don’t know who owns this data, and I don’t know how important it is, so I’m just going to delete it and see if somebody screams.”

They don’t do that. Nobody does that, because of the risk that if it was valuable and you flushed it, you just cost the company potential value for the sake of your own convenience.

The drawer of miscellaneous data.

THQ:

So to use a domestic example, companies run their data policy like a “miscellaneous drawer”? That drawer into which you put random buttons and pieces of string, rather than throwing them away, because “they might be useful one day,” and you’re scared that the moment you throw them away, you’ll suddenly have the perfect use for a random button and a piece of string?

TR:

Exactly that. Except companies rarely, if ever, open the drawer and take inventory of their growing button-and-string collection, to even see if there’s anything useful in there.

What can be done about the data security risk of data storage.

Data security – either you have it or you don’t.

And the point is, there may be people out there who can make illegal money out of the contents of your drawer – and who are prepared to go to some lengths to steal it and monetize it. That’s your valueless security risk.

THQ:

Got it. So why are companies still leaving themselves open to that risk? What’s the impulse in businesses to just keep hold of data? Is it just that impulse of saying “I don’t know who else has got copies of this, or when it might be useful again”? Or is there more to it than that?

TR:

Well, there are a few things, to be fair. Each individual region, each individual industry has its own data retention requirements, so in the US, a lot of the data retention requirements around healthcare, for example, are usually 7 years, and can be anything up to 21 years for paper-trail data. So at least for paper-based data, we have these huge libraries of data, despite so much of that data having now gone electronic too – in duplicate. So there is that factor of how long companies are legally required to keep information.

Storage isn’t a security risk – provided you secure the data you’re storing.

“There’s a minute from a board meeting in 1979 that I need. In here…somewhere…”

But that’s just the regulated data. And because it’s regulated, organizations tend to prioritize their controls when it comes to data security around it.

The ever-growing data issue.

THQ:

The corollary to which being that organizations don’t prioritize unregulated data?

TR:

Right. If you’re not guaranteed to have an audit on it, it’s just not as important to the organization.

That means they end up with two separate batches of data – regulated data and unregulated data. The unregulated data is where that “Can I delete this? Better not, you never know” mindset comes in. Because, as with your string and buttons, I guarantee you, the second you delete it, it’ll turn out to have customer information in there, and now someone’s asking you about this customer, and you want to go back and help them understand the history behind that customer.

So you don’t delete it.

Any of it.

The point then is that unless someone owns that data (and the data owner is a hard thing to find within a company), and that person tells me we don’t need that data anymore, I’m not going to delete it. I’ll archive it in long-term storage, and the chances are it’ll never be needed or looked at again. It becomes, to the organization, valueless data, retained potentially until the end of human civilization, unexamined.

THQ:

In the Limbo Drawer of Useful Items.

TR:

Exactly. And the other challenge that these organizations run into is where they put their controls. Which translates as the ways they have to go look for this data. Usually, once it gets archived into long-term storage, it’s no longer indexed into corporate search functions.

Where’s the security risk?

So it begs the question: if I’m going to dump the data into long-term storage that’s really hard to index, to what lengths would somebody go to find it if, for example, they suddenly had need of a really interesting button and a perfect piece of string? Versus just giving up and saying, “You know what, I’m not going to spend the next three months hunting for this piece of information. It’d be faster for me just to find another avenue to answer whatever question I need to answer.”

Same as with the items in the Limbo Drawer of Useful Items. If you forget which drawer it is, and it’s probably in the attic or the basement behind a whole bunch of heavy items you’ll have to move to get at it…

THQ:

You’re gonna go and buy a brand spanking new button.

TR:

Right. So what you have is a valueless drawer of buttons. Or a valueless data security risk.

Whereas if we apply the right controls, it gives us more information and more ability to dig into that data. When we look at data security – data monitoring, data classification, discovery, those kinds of things – if I’ve done a data classification project and then go looking for certain types of data, whether I’ve archived that data or not, I still know what server it was on, I still know what format it might have been in, and I might know something close to the exact file name, or the exact table or column in which it existed in the database. At the very least, I have a general idea of where it might be.

And, more importantly, files are nice enough to tell you the last time somebody actually accessed or modified them, so that’s helpful. So even if you wanted to do retention, you could say: just archive everything that hasn’t been touched in a decade.

Maybe that makes me feel safe – if it hasn’t been accessed in a decade, we can get rid of it, because clearly, no-one’s desperate to look at it.

Databases, sadly, don’t tell you the last time an individual record was touched, so you have to use some kind of technology that says “I’m watching what happens in your files, I’ll tell you, I can validate that their dates are the same as our dates.”
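
(The file-side half of that is simple enough to sketch. In this minimal example, the root path and the ten-year cutoff are our own illustrative assumptions:)

```python
import time
from pathlib import Path

TEN_YEARS = 10 * 365 * 24 * 3600  # seconds

def archive_candidates(root: str, max_age: int = TEN_YEARS):
    """Yield files neither accessed nor modified within the threshold."""
    now = time.time()
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        st = path.stat()
        # Access times are often disabled (noatime/relatime mounts), so take
        # the most recent of access and modification time to be conservative.
        if now - max(st.st_atime, st.st_mtime) > max_age:
            yield path

for stale in archive_candidates("/data/legacy"):
    print(stale)
```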

Touch my data and I’ll know.

My third-party product, sitting here watching all use of it, can tell me “I have no record of anybody touching that table in a year.” Or “I know that Terry, in fact, touched that table just last week.”

So then you maybe go ask Terry what was in it, and why he opened it. Is it important data? It gives you this inkling of information to go and start to find an avenue and say, “We’re keeping the server, but we want to clean it up. Can we do that safely? Can we start to delete some of this data? Otherwise, it just grows and grows.”

And of course, the Oracles and Microsofts of the world are happy to sell you more file space and more storage and more file protection, and everything that goes with it. And before you know where you are, you have quite the bill to safely store data about which you have no idea – whether it’s relevant, vital, whether it’s useless to you.

Why would you turn up your security risk by using poor data management and security?


THQ:

Paying for extra space in the Drawer of Useful Items that you’re never, ever going to open.

TR:

Yeah. That’s all it is.

“Fools! Bureaucratic fools! They don’t know what they’ve got there.” Raiders of the Lost Ark, exemplifying the problem of data without management or security.

In Part 2 of this article, we’ll look at why storing your valueless data ad infinitum is not just an expense but also a potential data security risk.

 

UK at high risk of catastrophic cyberattack https://techhq.com/2023/12/why-does-house-of-commons-committee-report-say-uk-at-catastrophic-cyber-risk/ Fri, 15 Dec 2023


• House of Commons Committee declares UK at “high risk of catastrophic cyberattack.”
• Fractured legacy technology infrastructure and an evolving cyberthreat pinpointed as urgent issues.
• The issue will be dwarfed by the UK’s economic woes at the next election.

“Poor planning and a lack of investment.” According to the more acid-tongued political observers in the UK, that’s a judgment that could be applied to any number of areas of any current government. But there’s a difference between the judgment of media observers and the pronouncements of the UK’s Joint Committee on the National Security Strategy.

The Committee, comprised of members of both the UK’s elected chamber (House of Commons) and its appointed chamber (House of Lords), has drawn attention to a paucity of both government funding and strategic planning for the cyber-safety of the nation, saying the country’s critical national infrastructure is “at high risk of a catastrophic cyberattack.”

Few in the House of Commons dare to use the word “catastrophic” without proof to back up the term. Doing so tends to be thought frivolous, and can end the career of any Chicken Little Member of Parliament (MP) who cries that the sky is falling in when it turns out not to be. It should be stated, however, that Committee pronouncements are commonly accepted to be a conduit through which more partisan language and thought might flow. The two chambers of the UK Parliament are traditionally more reticent.


The House of Commons Joint Committee on National Security Strategy has spoken.

The proof behind the Committee’s assertion is set out in its report. It acknowledges three significant factors that make the UK’s critical national infrastructure especially vulnerable, and leaves a fourth unspoken – though it is the province of observer gossip.

Vulnerability #1: fractured legacy technological layers.

Technology in the UK’s critical national infrastructure (CNI) is not only dependent on legacy equipment, but on layers of legacy equipment, some of which are not interoperable across departments, geographical sites or timeframes. That’s down to a lack of consistent funding and determination over time, meaning a large problem has been poorly addressed with piecemeal solutions.

The report explains that:

“• In the context of ‘ever-increasing digitalization of the UK’s CNI operations,’ many CNI operators are still operating outdated legacy systems. According to Thales, it is ‘not uncommon’ to find aging systems within CNI organizations with a long operational life, which are ‘not routinely updated, monitored or assessed.’ The increase in hybrid and remote working also brings additional risks.

  • Legacy operational technology (OT) poses a particular challenge: digital transformation is resulting in these assets, which were ‘never designed with smart functionality in mind,’ being ‘overlayed with IT and hyperconnectivity.’ OT systems are ‘much more likely to include components that are 20-30 years old and/or use older software that is less secure and no longer supported.’

  • Thales is seeing ‘increased [threat actor] activity across all of the critical national infrastructure sectors,’ with a move towards attacks on certain types of OT. Reliance on digital systems also means that attacks against operators’ wider IT systems can force companies to shut down their OT – as in the case of the US Colonial Pipeline attack, in which the affected systems were responsible for corporate functions such as billing and accounting.”

It is worth noting, however, that Thales is a long-standing government contractor tasked with both physical and cyber defense provision, so it will certainly have monetary skin in this game.

As will surprise no one, the UK’s National Health Service (NHS) – a single nationwide healthcare provider for everything from antibiotics to brain surgery and ER-based trauma – was a source of particular despondency when it came to legacy technology vulnerability.

“• The NHS remains particularly vulnerable: healthcare is a ‘large and growing target across Europe,’ and the NHS operates a ‘vast estate of legacy infrastructure,’ including ‘IT systems that are out of support or have reached the end of their lifecycle.’ This puts it in a ‘particularly difficult position to protect itself from cyberattacks,’ despite the fact that many critical medical devices and equipment are now connected to the internet. Many hospitals lack the capacity to undertake even ‘simple upgrades’ as a result of crumbling IT services and a lack of investment.”

Critical infrastructure technology.

The UK’s critical national infrastructure is at least partially decades out of date and out of warranty.

Vulnerability #2: the evolving nature of the threat.

While ransomware has been a significant threat to UK critical infrastructure for some time, the Committee’s report dwelled at length on the evolving nature of the cyberthreats the UK faces – especially as that evolution is happening faster than the technology underpinning critical infrastructure can practically be refreshed.

“Witnesses were almost unified on the changing nature of the threat, describing the evolution of a mature and complex ecosystem with a ‘cell-like architecture, akin to other forms of serious organised crime.’ Key developments include:

  • The growth in ransomware-as-a-service (RaaS), in which an efficient division of labour has evolved. Typically, ‘initial access brokers’ will achieve the initial hack and sell the access onto ‘affiliates;’ ransomware operators will also sell a malware source code to affiliates (and might also negotiate with victims); and affiliates will then pay a service fee to ransomware operators for every collected ransom. These ‘groups’ of actors are connected in quite loose ways, making attribution of responsibility for attacks more difficult. This efficiency of specialization has increased the tempo of ransomware operations. It has also lowered the cost barrier to entry into ransomware, because less sophisticated criminal groups (affiliates) can purchase the required technology to conduct more advanced attacks. One witness described the typical threat actor now as ‘quicker, more agile and brazen.’
  • Innovations in marketing, recruitment and communication: RaaS operatives are known to offer their services on a monthly subscription basis with optional extras, and have actively recruited affiliates. Groups operate on closed chatrooms to communicate with one another, and some even act like legitimate enterprises, establishing HR functions to coordinate their annual leave.
  • A shift towards larger, higher-value targets (sometimes described as ‘big game hunting’), with threat actors developing more ‘sophisticated weaponry’ and achieving much larger ransom payouts.
  • An increase in double or triple extortion methods, in which ransom demands are linked to threats to publish sensitive data online; in these cases, the data may be “exfiltrated” (removed) rather than encrypted. Organizations are thus held to ransom on the grounds of confidentiality (release of sensitive data), and not just availability (access to files). In triple extortion, the victim’s customers or suppliers may be threatened with the release of sensitive data if they do not pay a further ransom; a “premium subscription” might also be on sale—to the victim and others—in exchange for exclusive rights over the data.”

Vulnerability #3: geopolitics

The report, authored by members of both the House of Commons and the House of Lords, from across the political spectrum – cites external geopolitics as a factor in the UK’s particular vulnerability.

“The National Cyber Security Centre’s (NCSC’s) 2022 Annual Review noted that most of the ransomware groups targeting the UK are ‘based in and around Russia,’ benefiting from ‘the tacit consent of the Russian State’;

  • The NCSC’s Annual Review 2023 raised the same concerns but placed emphasis on the development of ‘a new class of cyber-adversary’ who are often ‘sympathetic to Russia’s further invasion of Ukraine and are ideologically, rather than financially, motivated.’
  • In its written evidence to this inquiry, the Government stated with near certainty that ‘the deployment of the highest impact malware (including ransomware) affecting the UK remains concentrated mostly in Russia;’ and
  • DXC Technology, a US IT company, told us that, of the ten most prolific and dangerous ransomware strains identified by the NCSC’s Ransomware Threat Assessment Model, eight are ‘likely based in Russia.’

According to RUSI, some of these groups are experienced in this evolving field of offending: in many cases, the same Russian actors were conducting ‘malware and botnet operations’ against UK financial institutions from 2010 onwards, and have subsequently ‘pivoted their business model’ towards ransomware operations. The lines between state activity and criminal groups are also blurred.

Prior to Putin’s full-scale 2022 invasion, Ukraine itself harbored an element of the ransomware threat: Jamie MacColl from RUSI commented that the ransomware ecosystem contained ‘multiple nationalities from former Soviet Union countries, including Ukraine.’ The NCA told us that it had worked with the Ukrainian Government in the past to investigate and arrest some of those offenders, but that the Ukrainian attackers had subsequently either gone to Russia or had ‘turned to attacking Russia,’ rather than the West. The impact of the war on cyberthreat levels overall appears mixed: a reported wave of cyberattacks against Ukraine encountered strong defences, and some downward global trends have been attributed to the war distracting Russian aggressors away from conducting ransomware attacks. It has also caused splits within ransomware groups, with members coming out for and against the Russian Government. This splintering may have made such groups even harder to disrupt.”

Putin as a hacker.

The intensification of pro-Russian hacking has an effect on the UK’s vulnerability.

The unspoken vulnerability.

While the report runs to a handful of pages on specific actions that should be taken to address the parlous state of the UK’s cybersecurity in critical national infrastructure, the unspoken vulnerability remains visible in the condemnation the report withholds for those two central weaknesses – a lack of funding, and a lack of planning.

To deliver on either of those things, a government has to be willing and able to focus on the widespread cyberthreat on the nation’s critical national infrastructure in the long term.

A revolving door.

The government of the UK has been a revolving door for 7 years, with five Prime Ministers in that time.

The UK has had what is technically one government since 2010. During that time however, it has had no fewer than five Prime Ministers, all from the Conservative party. All but the first (David Cameron) have campaigned, won, and governed with a single central political goal – making a success out of the 2016 Brexit vote, which divorced the UK from the rest of the EU.

Until recently, delivering a successful post-Brexit reality has been the main concern of both the Home Office and the Foreign and Commonwealth Office, leaving little in the coffers for any fundamental revamp of cybersecurity across the UK’s critical national infrastructure.

Add the hugely negative economic impact of Brexit (a sudden lack of equal access to the UK’s closest economies, labor, and law) and you have a situation that has taken up the vast amount of government time, focus, and money.

It has also been a period of extraordinary upheaval – one of the Prime Ministers (Liz Truss) lasted just 44 days, having managed to cost the UK economy £30 bn single-handedly in little more than a day.

Obviously, there was the Covid pandemic, and the economic time-bomb of several extended national furloughs. A Parliamentary enquiry was ongoing, even as the Committee’s findings were hitting House of Commons printers, over the extent to which the Covid response by the UK government was mishandled.

Prime Minister Boris Johnson was forced to resign after it was discovered he had knowingly lied to both the House of Commons and the nation. Inflation soared, creating a cost of living crisis and forcing whole groups of workers to strike for decent wages – including several groups in the NHS.

The report warns amelioration could cost the country “tens of billions.” The UK needs a government that’s stable and focused on solving the cyberthreat problem long term, rather than delivering piecemeal solutions to parts of the infrastructure.

The cyberthreat question may have snuck up on the government over time – but the UK government has also never been stable enough in 13 years to tackle it effectively.

In the next 12 months, there will be a general election in the UK. The cybersecurity of the country’s critical national infrastructure is unlikely to be a front-line campaign issue in a country with underfunded schools, hospitals, and infrastructure, and an ailing economy where the gulf between rich and poor is the widest in the world (save the US).

It is to be hoped though that reports like that of the Joint Committee can at least make CNI cybersecurity a tangential priority of whichever party forms the next working majority in the House of Commons, and the country’s next government.

NB – this was five years ago, after the WannaCry attack. Nothing appears to have improved since then.

Windows Hello says goodbye to laptop security in testing https://techhq.com/2023/12/why-are-manufacturers-failing-their-windows-hello-implementations/ Tue, 05 Dec 2023


• Windows Hello – the fingerprint-based biometric security system – has been cracked by vulnerability investigators.
• Windows Hello depends on two key systems, the MOC sensor and the SDCP protocol.
• Investigators from Blackwing found device manufacturers were not diligent in maximizing the protection of laptops.

Hubris is a wonderfully entertaining thing to watch play out on a stage. It’s significantly less wonderful if your laptop’s super-duper biometric security system comes with false pride built in. This does not appear to have occurred to the manufacturers of several laptops protected by Windows Hello – the fingerprint scanner that’s supposed to deliver personalized, biometric security for all your sins, secrets, and sales figures.

At least, it didn’t necessarily occur to them at the appropriate time.

The ideal time for comprehensive bug-testing and security audits is before you release a piece of hardware into the wild. But when Microsoft’s Offensive Research and Security Engineering (MORSE) asked Blackwing Intelligence to test out the security on a number of new laptops that were advertised as being compatible with Windows Hello, it’s probably safe to say they expected a glowing report.

If so, what followed was probably quite a gloomy day at MORSE headquarters.

MORSE gave Blackwing three of the most popular laptops that claim to be compatible with the Windows Hello fingerprint security protocols: the Dell Inspiron 15; the Lenovo ThinkPad T14; and the Microsoft Surface Pro Type Cover with Fingerprint ID (for Surface Pro 8/X).

Blackwing ran a vulnerability search process, including significant reverse engineering of both hardware and software, cracking cryptographic flaws, and deciphering and reimplementing proprietary protocols. And while all of those processes took the experience away from the Mission Impossible idea of rendering top-notch security invalid with a paperclip and a ballpoint pen, the end result of Blackwing’s efforts was a full bypass of Windows Hello.

On all three laptops.

While the number of processes through which Blackwing put the laptops to thoroughly diagnose their vulnerabilities was extensive, there were two core elements of the machines that allowed the investigators to entirely bypass Windows Hello.

They were 1) the match on chip sensors, and 2) the secure device connection protocol.

MOC the weak?

Match on chip describes the kind of sensors used in Windows Hello. There’s an alternative, match in sensor, but critically, Windows Hello doesn’t work with those sensors, so any machine that says it’s compatible with Windows Hello will have match on chip sensors.

That’s what bad actors call a single point of vulnerability, and it’s why, for instance, Blackwing was able to perform the same sort of bypass of Windows Hello across three entirely different machines made and branded by three different manufacturers.

Match on chip sensors (MOC sensors) contain a microprocessor and storage built into their chip (the clue is in the name). That setup means fingerprint matching is done locally too, the scanned print being checked against an on-chip database of templates (which you set up when you start to use Windows Hello).

In what is in fact fairly sound engineering theory, that should make for safer biometrics. The prints never leave the chip in the scanner, so the risk of them being stolen is vanishingly small.

Windows Hello uses a particular kind of sensor.

Fingerprint profiles are stored on chip in Windows Hello-compatible machines.

That theoretical extra safety measure is why Windows Hello requires the use of this kind of scanner.

So far, so good, right?

The second line of defense.

Yes, as far as it goes. Sadly, with a devious enough mind and something a touch more sophisticated than a screwdriver, it is still possible to get a malicious sensor to spoof a genuine one in its communications with the host and persuade the system that it’s been verified when it hasn’t.

Spoofing is equivalent to sliding into someone’s DMs… and then stealing their identity, their protocols, their privileges, and probably their house to boot. The malicious sensor sends communications to the host to say everything’s just fine and dandy, and it should allow the verification of whatever it’s being asked to verify, while the innocent sensor is, by contrast, baffled and silent, locked in the electronic equivalent of the basement.

So while match on chip sensors are technically extra secure, they do have a fairly well understood weakness.

The point being that Microsoft… knows that.

In fact, it knows it so thoroughly that it created a whole protocol to ensure the security of the connection between the sensor and the host, to effectively lock out any malicious sensors and make sure the host is communicating strictly, securely, with the innocent sensor with the match on chip.

The protocol is called the secure device connection protocol, or SDCP.

The second element of vulnerability that allowed Blackwing to leave Windows Hello weeping, in pieces, at the feet of all three laptops.


Windows Hello – not as vulnerable to hackers as it is to inattentive manufacturers.

Again, in perfectly sound engineering and computing theory, the SDCP exists to do one thing – to make communication between the sensor and the host reliably secure.

To do that, it needs to make sure the fingerprint device is trusted and healthy, and it needs to ensure the security of the communication between the sensor and the host.

To achieve that, it needs to answer three questions about the fingerprint sensor: how can the host be certain it’s talking to a trusted device and not a malicious device?; how can it be sure the device hasn’t been compromised?; and how is the raw data protected and authenticated?

If it can answer all three of those questions, then in theory, it should be able to operate with a degree of communication certainty that would make Windows Hello as safe as it needs to be. And, crucially, as safe as it’s marketed to be, as a biometric security system to keep systems like laptops as private as we want in this supposedly vulnerability-conscious age.
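
(To make the challenge-response idea concrete, here’s a deliberately simplified sketch of how an authenticated host-sensor channel defeats spoofing. To be clear, this is not Microsoft’s actual SDCP – the real protocol bootstraps its keys from a certificate provisioned in the sensor at manufacture – but the principle of a MAC over a fresh challenge is the same:)

```python
import hashlib, hmac, secrets

PAIRED_KEY = secrets.token_bytes(32)  # stand-in for a key established at pairing

def sensor_respond(key: bytes, challenge: bytes, match_ok: bool):
    """Genuine sensor: binds its yes/no result to the host's fresh challenge."""
    result = b"\x01" if match_ok else b"\x00"
    tag = hmac.new(key, challenge + result, hashlib.sha256).digest()
    return result, tag

def host_accepts(key: bytes, challenge: bytes, result: bytes, tag: bytes) -> bool:
    """Host: only trust a 'match' whose MAC covers this exact challenge."""
    expected = hmac.new(key, challenge + result, hashlib.sha256).digest()
    return result == b"\x01" and hmac.compare_digest(expected, tag)

challenge = secrets.token_bytes(16)  # fresh nonce per authentication attempt
result, tag = sensor_respond(PAIRED_KEY, challenge, match_ok=True)
assert host_accepts(PAIRED_KEY, challenge, result, tag)

# A spoofing sensor that never saw the paired key cannot forge a valid tag:
forged = hmac.new(secrets.token_bytes(32), challenge + b"\x01", hashlib.sha256).digest()
assert not host_accepts(PAIRED_KEY, challenge, b"\x01", forged)
```

Replay is blocked because every attempt uses a fresh nonce; spoofing is blocked because the tag can’t be computed without the paired key.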

Three professional months to crack Windows Hello.

As with the MOC technology, the SDCP is actually a more-than-reasonably clever way of shutting down the sneaky operators who would try and get past even the most modern of security systems.

But Blackwing managed it. It managed it with a 100% reliable bypass rate, across three different machines.

What’s the takeaway? That Windows Hello is fatally flawed and not worth the silicon it’s written on?

Absolutely not. It’s worth noting that Blackwing does this sort of thing for a living, and it took its investigators a solid three months of daily access to work out how to compromise the system. Once it’s been done once – or indeed, three times – of course, the process can be sped up and streamlined, but still, the weakness appears not to be in Windows Hello itself.

In explaining the functions of the MOC and the SDCP, we’ve only taken you to the gates of the problem. If you appreciate extremely technological cleverness, reading the original Blackwing report on the process of breaking Windows Hello will make you both boggle and chuckle in techie.

Shields up, red alert! No, really – raise the shields!

Windows Hello has two lines of initial defense. One of them wasn't switched on in two out of three cases.

One of these things is not like the other… Device manufacturers would do well to tell them apart.

The point, as Blackwing concludes having spent three months on the problem, is not that Windows Hello is particularly weak, but that device manufacturers either don’t understand it, or don’t do a sufficient amount of configuration and testing before their machines are sent out into the world. Sensor encryption generally used poor-quality code, and there’s a likelihood that the sensors used by manufacturers are subject to memory corruption.

But the big spoiler is one that will make bad actors belly laugh.

In two out of the three devices, Blackwing found that the SDCP – the protocol designed specifically to establish secure communication between the sensor and the host, and so close out the loophole in MOC sensors – wasn’t switched on by default.

There are reasons why system engineers twitch whenever anyone dares use the word “foolproof.”

Blackwing is hoping soon to turn its attention to systems by Apple, Android, and Linux.

Watch this gaping hole in security protocols for more as we get it…

 

How to bring streaming, business telephony and environmental sustainability together https://techhq.com/2023/11/can-environmental-sustainability-co-exist-with-streaming-culture/ Wed, 15 Nov 2023


• Entertainment streaming and environmental sustainability are like oil and water.
• Business communication can be improved without increasing the burden on a firm’s eco-footprint.
• There’s a material advantage to streaming platforms ditching data centers full of high-quality audio files.

In Part 1 of this article, we spoke to Rob Reng, CTO of IRIS Audio, an AI audio startup which aims to deliver clearer audio in call center settings without the carbon burn that transmitting high-quality audio currently entails. He explained the ecological impact of our 21st century streaming culture and our post-pandemic use of video conferencing software to keep businesses – and the remote and hybrid working culture – alive, in spite of their impact in terms of environmental sustainability.

In Part 2, Rob outlined the philosophy and the beginnings of a technology that could let us eat our audio cake and have it too, without burning the planet to the ground.

As well as vindicating those of us old enough to remember a “download once, play forever” techno-paradise, he explained the cleverness behind the IRIS solution, which involves transmitting low-quality (and therefore low cost in terms of environmental sustainability) audio, using AI in situ at the receiver’s end to boost the quality to high, and then distributing the processing cost across lots of devices, saving the environmentally ruinous data center storage and cooling costs.

Streaming platforms are the inconvenient enemy when it comes to environmental sustainability.

Every time you stream Nickelback, a tree cries in the forest.

Most of the time, the tech industry does its level best to harden our shell of journalistic skepticism. Every now and then though, we come across a tech solution that takes us back to our cub days of changing the world one blistering sentence at a time, so we indulged ourselves, trying to find out a little more about how AI might help save the planet from its latest Taylor Swift-flavored eco-disaster.

The market for audio environmental sustainability.

THQ:

So tell us more about the project you’re working on. How do you sell it? And who do you sell it to?

RR:

As I said, at the moment, we’re focusing on our voice product, IRIS Clarity.

A lot of the use cases for that are in call centers. IRIS Clarity will remove all the noise and all the rubbish you don’t want to listen to. And then, what we’re calling IRIS Super HD will take what you have and make it sound much more pristine and high-quality.

The two of them together give you an exceptionally clear and very high-quality communication, without the eco-impact of transmitting heavy, high-quality audio from place to place.

And while it will sit very well in the world of call centers and sales teams, it’s not just a product for those worlds. We also do a lot of work in motorsports and emergency communications and all these types of situations that use analog radio, where the signal is routinely absolutely terrible.

And, especially in the emergency services, you want reliable, clear communications wherever you are. So we see that as a very important place where our technology could perhaps make a critical difference.

Emergency services need clear communications at all times.

“No, I said SPLEEN. RUPTURED SPL-Hello? HELLO??”

Saving the streaming world from itself.

THQ:

You mentioned in Part 2 that you were starting with the voice product, and there’s an undeniable sense to that – if we can lift the carbon burden off firms with call centers, firms with sales teams, and also critical services like emergency responders, that has to be a big easing of the carbon weight of significant chunks of voice transmission in society.

But you also mentioned eventually moving on to offer something similar to the music-streaming mega-platforms, given that they burn whole airline companies’ worth of carbon each year in terms of streaming entertainment.

How do you make the shift from emergency response communications and call centers to entertainment streaming platforms and – presumably – the significantly more complex sound profiles of songs for streaming?

RR:

Oh, without a doubt, music streaming is harder to do – it's a significantly harder problem to solve, because music compression is far more complicated than voice compression. But it's not a problem that can't be solved; it's just going to take a little longer to get that product to market. And then it's a case of proving to the big streaming services like Spotify that you can take away all the costs of storing high-quality audio.

Lots of songs have been streamed over a billion times.

Lots of songs have been streamed over a billion times. A billion times. With a B…

Those costs will be ridiculous when you think about 50 million files, each of them in high-definition audio. Their storage costs are going to be phenomenal. So if you can prove to them that they can take all that cost away and just run our algorithm instead, it should be something they'll bite your hand off to do.
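For a rough sense of scale – every figure below is our own assumption, not Rob's – a quick back-of-envelope calculation suggests he's not exaggerating:

```python
# Back-of-envelope catalog storage estimate; all figures are assumptions.
catalog_tracks = 50_000_000     # files, per the interview
avg_minutes = 4                 # assumed average track length
hires_kbps = 4608               # 24-bit / 96 kHz stereo PCM ≈ 4,608 kbps

bytes_per_track = hires_kbps * 1000 / 8 * avg_minutes * 60
total_pb = catalog_tracks * bytes_per_track / 1e15
print(f"≈ {total_pb:.1f} petabytes")   # ≈ 6.9 PB for one uncompressed copy
```

Roughly seven petabytes for a single copy of one catalog, before any mirroring across regions, multiple formats, or backups – which is where the "phenomenal" comes from.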

THQ:

And then of course they can sell it to customers on the grounds that “We’re the green streaming service” because they’ve ditched the need for that eco-spensive data center.

RR:

Eco-spensive?

THQ:

It just came to us, we’re going with it.

RR:

Err… then, yeah. It allows them to claim to be the green streamer, 100%. And they could back that up with hard evidence. So it becomes a nice PR story, and something that makes them feel warm and fuzzy.

Selling environmental sustainability to Gen Z.

THQ:

Yeah – we mainly mention that perception thing because there's a lot of evidence that younger generations, who are heavy streamers, base their commercial decisions on their ethical concerns, and that particularly extends to the ecological soundness of companies. So there's a real likelihood that that's an audience that fits this kind of technology, if it can be broadcast to them.

RR:

Absolutely. It’s a nice thing to have, because you can make the platforms’ life easier and more cost-effective, and bring legitimate environmental sustainability credentials. There’s a lot of power behind that movement. If it was only a green movement, and it didn’t save them any money, then you probably wouldn’t get much traction, but the two of them together? It’s a match made in Heaven.

Streaming and environmental sustainability needn't be at odds.

Before you stream, ask yourself how eco-spensive it is.

THQ:

And of course, a little perversely, once one platform has announced it's the greener streamer, it might go quite some way to combatting the current lack of awareness of the environmental sustainability costs of streaming, because the audience for streaming services will go "Greener than what?"

RR:

And begin to ask questions, absolutely. There’s going to be quite the market bonus in being among the first streaming platforms to use our services, or something like them, to ditch the… if you insist, eco-spensive data center, because it will put them ahead of the competition but also at the forefront of awareness-building among that audience, which as you say, bases a lot of its decisions on a company’s green credentials.

THQ:

Perhaps a match not quite made in Heaven, then. Made in Eden? Maybe.

 

The post How to bring streaming, business telephony and environmental sustainability together appeared first on TechHQ.

]]>
An environmental sustainability solution for streaming and calls https://techhq.com/2023/11/how-do-youmaintain-environmental-sustainability-while-streaming/ Wed, 08 Nov 2023 10:30:21 +0000 https://techhq.com/?p=229558


• Environmental sustainability does not sit well with 21st century entertainment streaming.
• But high resolution business telephony is also a culprit in imperiling environmental sustainability.
• Streaming has become the new normal – so technology needs to evolve to make it work for the environment.

In Part 1 of this article, we spoke to Rob Reng, CTO of IRIS Audio, an AI audio startup which aims to deliver clearer audio in call center settings without the current weight of energy wastage or carbon burn.

In fact, Rob explained to us the true ecological cost of our post-pandemic world of streaming audio and video – globally, each year, the world burns as much carbon in streaming media as the whole of Spain burns across all its industries in the same period. Thousands of transatlantic flights' worth of carbon emissions are released by individual mega-selling Spotify song streams.

The consequences of this invisible streaming cost are profound – video conferencing with images costs carbon. YouTube, Spotify, Alexa – carbon. All the tools companies have traditionally used to improve audio quality in their phone systems – carbon. None of which is good for our environmental sustainability.

Which is what led IRIS to invest in an AI-based way to reduce the amount of high quality (and so, heavyweight) audio transmitted in scenarios such as sales and customer service calls.

In the IRIS solution, sound is sent from wherever it’s produced or stored in low quality, and AI at the receiver’s end boosts it to high quality in situ. That means there’s no perception of a quality drop, and no need to send heavyweight high quality audio, incurring the carbon cost that entails.

The quality of audio is not strained…

We had an obvious question.

THQ:

How does that process actually work?

RR:

Well, in some respects, it's the same as all other machine learning-based algorithms. You can teach an algorithm to do almost anything based on the training data you use. So if your training data is vast, which it has to be, you can teach a piece of software to make a prediction, because compression, whether it's voice or music, is all about removing things that it believes you don't need.

So for voice content, the transmission process normally shaves off everything above about 4 kilohertz (human hearing goes up to around 15 kilohertz). It just takes away the top 11 kilohertz of sound.

When you’re hearing somebody on the phone, and you think, “Oh, God, this sounds terrible,” that’s because all of that audio information has just been brutally discarded.

So, given what we see at the bottom of the range, we can then predict what should have been there. Once you know that, it’s just a case of filling in the dots around it. This has already been done for image recognition, as well as in medical equipment. You intensify and clarify the image by filling in those dots where stuff is missing with elements most likely to have been there.
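To make that concrete, here's a minimal sketch of the kind of degradation being described – a full-band signal low-passed at 4 kHz, which is also exactly how you'd manufacture an input/target training pair. The sample rate, the filter settings and the toy signal are our illustration, not IRIS's pipeline:

```python
# Simulating the "brutal discard": keep only what sits below ~4 kHz.
# All parameters here are illustrative assumptions.
import numpy as np
from scipy.signal import butter, sosfiltfilt

sr = 30_000                                   # sample rate; covers ~15 kHz
t = np.linspace(0, 1.0, sr, endpoint=False)

# Toy "voice" with components at 300 Hz, 2 kHz and 9 kHz.
full_band = (np.sin(2 * np.pi * 300 * t)
             + 0.5 * np.sin(2 * np.pi * 2000 * t)
             + 0.3 * np.sin(2 * np.pi * 9000 * t))

sos = butter(8, 4000, btype="low", fs=sr, output="sos")
narrowband = sosfiltfilt(sos, full_band)      # the 9 kHz component is gone

# (narrowband, full_band) is the kind of pair Rob describes: the model's
# job is to predict full_band when it is only ever shown narrowband.
```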

Environmental sustainability versus call quality?

We’ve all had those “What did you just say?” moments. Some people have them all day long.

THQ:

Oh, like the Samsung camera moon shot idea?

RR:

Something like that, yeah. In the world of imagery, it’s called super resolution. But it’s not really done so much in audio – or at least, it hasn’t been, yet. We’re very keen to take the ideas from the world of imaging and put them into the world of audio.

THQ:

And, without wishing to sound repetitive, how do you do that?

Training AI to respect environmental sustainability.

RR:

We train a network on thousands of hours of high resolution and low resolution sounds. So when it sees something that’s low res, it goes, “Oh, I know how to make that high res.” And it just interpolates and rebuilds what it believes should have been there in the first place. And so far, it’s been very accurate.
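In outline – and this is our hedged sketch of the standard paired-training technique, not IRIS's actual code – that setup looks something like this:

```python
# Paired super-resolution training sketch: low-res in, high-res target out.
# The model, data and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-in model: same-length mapping from degraded to full-band audio.
model = nn.Sequential(
    nn.Conv1d(1, 32, kernel_size=15, padding=7),
    nn.ReLU(),
    nn.Conv1d(32, 1, kernel_size=15, padding=7),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Fake batch standing in for "thousands of hours" of real recordings.
high_res = torch.randn(8, 1, 16000)             # target: full-band audio
low_res = F.avg_pool1d(high_res, 2)             # crude degradation
low_res = F.interpolate(low_res, size=16000)    # stretch back to length

for step in range(3):                           # real training runs longer
    optimizer.zero_grad()
    loss = F.l1_loss(model(low_res), high_res)  # penalize the gap
    loss.backward()
    optimizer.step()
```

The idea is that the network learns a general rule for what plausible high-frequency detail looks like given the low band – the "interpolates and rebuilds" Rob describes.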

THQ:

OK, allow us a moment of devil's advocacy here. We don't hear a lot about the ecological impact of audio, but we are increasingly familiar with the potential environmental sustainability impact of generative AI and the data center eco-costs it entails. So where's the balance-point between using an AI system to enhance audio and the eco-costs of running that AI?

RR:

Ah – yes. Well, the point with our kind of system is that it’s running at the endpoint, so the processing power is distributed across millions of small devices, as opposed to having one massive data center crunching Bitcoin or doing what all these other industries do with their huge data farms. That would involve having to power the machines, and cool them down with a lot of air conditioning and fans, etc – which is where the questions of environmental sustainability really kick up a gear.

In this case, when we run it on the endpoint, we don’t have to worry about cloud infrastructure, we’re just deploying the power of a mobile phone or a laptop for the duration of that call. And in addition to that, because it’s a valid concern, we’ve done a huge amount of work to make our algorithms very light, precisely so they’re not eating up all the power of your battery while they’re running. You’re not going to kill your laptop by running our endpoint AI solution or anything like that.

And obviously, by running it locally at the endpoint, firstly, you don’t have to pay for a server, and secondly, you don’t have to have the overhead of all the operational costs and the environmental costs that come with it.

That’s really why we’ve always opted to run our AI algorithms at the endpoint and on the customer’s device – so you don’t pay for it. And you don’t have to worry about the knock-on environmental sustainability effects.
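Rob doesn't specify how IRIS keeps its algorithms light, but one standard route to an endpoint-friendly model is post-training quantization – a hedged sketch, assuming a PyTorch model built from linear layers:

```python
# Post-training dynamic quantization: one common (assumed, not confirmed)
# way to lighten a model for phones and laptops. The network is a stand-in.
import torch
import torch.nn as nn

model = nn.Sequential(          # toy enhancement network
    nn.Linear(256, 512),
    nn.ReLU(),
    nn.Linear(512, 256),
)

# Store Linear weights as int8: a smaller download, less memory,
# and cheaper arithmetic per audio frame on the listener's device.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

frame = torch.randn(1, 256)     # one frame of audio features
with torch.no_grad():
    out = quantized(frame)
```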

Environmental sustainability – often not factored into audio decision-making.

Audio quality is quite the recognized issue in business.

THQ:

But you still have to train the system, which is going to require machine learning and the carbon costs that incurs, right?

Bring back the iPod!

RR:

Yes, that's true. You do need to train your system – either on a machine you can run in your own office, which is the most cost-effective way of doing it, or, because it's usually good to get ad hoc training on top of that, on Amazon and Google infrastructure, where we run experiments. But obviously, we try to limit that as much as we can, and use our own infrastructure.

THQ:

The only reason we harp on that is because, as we said, there are things that are beginning to become understood in terms of ecological impact, and there are things that aren’t, and audio is something that isn’t yet widely seen by either the general public or the business world as a thing that has to be accounted for in terms of environmental sustainability.

So, do you think people are aware enough yet of those impacts? Enough to make them immediately see the benefits of this kind of system?

RR:

I think things are currently set up to be too easy, certainly. It’s too easy to stream, for instance. The whole ecosystem is set up in such a way as to make it seamless, so you just open Spotify and you have the world’s music at your fingertips, and you can just press a button and the music’s there waiting for you with 5G.

That makes it that much easier than sitting there waiting for seconds or minutes at a time for your song to be downloaded for you to listen to. But that completely hides the fact that there’s music being stored somewhere.

The billions of tracks in various sizes are being stored in huge, huge data centers somewhere. That’s the forgotten part of the whole experience. And rather than, for instance, just downloading the song once, on one device, most people have several streaming services, on several devices, and stream the song as needed on any of them, or many of them. And each time, you’re having an impact, rather than just having downloaded it once and running it off iTunes. You’re incurring a cost each time you stream it.

THQ:

So, on behalf of those of us who are as old as dirt and are still nursing our last iPod Classic along, that was a more ecologically sound solution?

RR:

Absolutely, it was. It would be a much better model to bring back.

THQ:

Yo, Apple! Hear us! Make new iPods! For the sake of the planet!

Music on an iPod.

Seriously, this was too hard to save the planet?

RR:

However, I think we’ve unfortunately gone past that.

THQ:

Damn!

Convenience versus environmental sustainability.

RR:

The streaming infrastructure just makes it too easy to have all that music at your fingertips and to be able to effortlessly switch between artists and find new music, which is obviously a positive thing in a lot of ways.

Which means it’s not embedded in our psyche anymore to download music and to run it off a single device like your iPod. Most people seem to be streaming now.

THQ:

Which is the point, yes? Streaming is the ultra-convenient alternative to all that downloading malarkey that people used to do.

RR:

Exactly, yeah.

THQ:

So take us on a tangent.

Do we think people who are now intensely familiar with the convenience of one-touch entertainment streaming would necessarily care about the environmental sustainability impact of it?

If we were to say “Stream and the planet gets it!,” would most people change their behavior? Or are we looking to put the carbon-burden onto the streaming companies or the companies that run the systems?

RR:

Yeah, I’m excited to make people care, but it’s a valid question – and it’s not one that only exists in regards to streaming. Look at our food choices. We all know it would be far better, ecologically, not to eat meat, but we choose to do it anyway. Because we like it. So it’s really hard to force people to be vegetarian music downloaders with just one device and a lunchbox full of tofu.

THQ:

Because we’re fighting against previous norms of convenience and pleasure, which makes us come off as tedious miserabilists?

RR:

Yeah – we all have choices, and of course, you can’t do everything. You just have to do the things you think are enough to try and make a difference. And, you know, do the best you can.

The irony of including this link to a video you can stream from a YouTube server somewhere is…not entirely lost on us.

 

In Part 3 of this article, we’ll dive deeper into the AI solution to call quality IRIS has developed, and explore what it can do right now – and what the hopes are for its future.

The post An environmental sustainability solution for streaming and calls appeared first on TechHQ.

]]>
AI and the future of work https://techhq.com/2023/11/ai-and-the-future-of-work/ Tue, 07 Nov 2023 16:26:01 +0000 https://techhq.com/?p=229578

• The future of work was thrown into doubt by Elon Musk recently.
• He claimed that AI would "do everything," making human work redundant.
• What would such a circumstance mean for our society?

Any technology that fundamentally changes the nature of work will be bound to dictate the future of work unless those who have previously done the work can alter the course of events.

The Luddites had some ideas about the future of work.

“Luddite Memorial, Liversedge” by Tim Green aka atoach is licensed under CC BY 2.0.

The industrial revolution brought in machines that did the work of human beings. Attempts were made to stop this process through acts of protest and/or vandalism that gave the world its understanding of the term "Luddite" – and the eventual victory of automation allowed the winners to write history, describing the Luddites as doomed to failure because they stood in the path of progress.

Other necessary jobs emerged, but the people who did them were usually different people from those whose livelihoods had been affected by the coming of machines and industrialized economic systems. Essentially, the new wave of workers comprised people who didn’t necessarily have the craftsmanship of the previous generation in the same industry: they didn’t need such peculiarities of skill and training.

The robotic revolution of the 1960s and 70s had a similar effect, reducing both skilled practitioners and whole career ladders (from apprentices to journeymen to masters) to nothing.

The point of which is this – with both revolutions, society, as we look back on it, advanced in leaps and bounds. The results, both times, were inescapably positive.

But also, in both cases, the revolutions concentrated economic power – and political power – in the hands of a small number of people who had control over the means of production, and a relatively small aspirational middle class of administrators. The mass of the working population was made subject to an ever-growing wealth and prosperity divide because their skills were not able to command adequate remuneration.

The next revolution and the future of work as we know it.

We mention all this potted industrial history because, whether or not the working class has realized it yet, we’re on the brink of a brand new revolution.

Elon Musk met with UK Prime Minister Rishi Sunak (a man officially richer than the King of England) at the recent AI Safety Summit at Bletchley Park, and this is what he had to say:

“We are seeing the most disruptive force in history here. There will come a point where no job is needed – you can have a job if you want one for personal satisfaction, but AI will do everything. It’s both good and bad – one of the challenges in the future will be how do we find meaning in life.”

His words get to the nub of the AI issue. To what extent will it be a tool we use to help us with our work so that we can lead more fulfilling lives, and to what extent will it replace human beings, because it will do the job faster, and in the eyes of bosses and consumers alike, better than the people who did the work before?

It’s the nature of the science-fiction utopian/dystopian divide. What Musk is describing could, with optimistic eyes, be seen to be the foundation of the likes of Star Trek’s Federation – smart AI getting rid of all the tedious aspects of life, freeing up human beings to better themselves in any way to which they feel drawn.

The inconvenient truths.

What that comparison ignores, though, is that a) such a society depends on the ability to meet all the basic (and indeed complex) needs of human beings without labor for payment and b) the assumption that society is responsible enough to use such technology to the benefit of all humanity. In Star Trek lore, incidentally, humankind went through cataclysmic war and horror before such a society was made real.

We are, in fact, the Before picture in any such utopian, tech-enabled future.

AI and the future of work illustrative image.

“Fanuc Robot Arcmate” by Kitmondo.com is licensed under CC BY 2.0.

And – not to put too fine a point on it – Musk's is also a vision that's problematic, even at its very best, in terms of detail.

If AI does everything, that means that the value of work, the value of knowledge held by people, and the value of extensive training by human beings is precisely nil – because AI will do any job better, even those for which people have traditionally needed extensive training. Let’s not forget that ChatGPT can already pass the US bar exam and is helping speed up both gene therapies and cancer treatment.

The death of the American Dream.

While arriving at that state would probably entail the most society-wide resistance of any revolution to date (simply by virtue of the fact that it would affect all levels of society equally), the existence of such a state of universal, equal human worthlessness is absolutely the antithesis of, to take a case at random, the American Dream, where hard work = success = money and progress up a social ladder.

If AI does all the jobs, there is no social ladder. Everybody is somewhat Communistically equal.

Also, if human work has no necessity and therefore no appropriate monetary reward, then the provision of the necessities of life would have to be dependent on hand-outs from those in power, who are still those who control the means of production.

The means of production are the AIs, of course. The Universal Basic Income is an example of the granting of basic needs, but if we’re arriving at a society in which human work is unnecessary, it would have to be massively expanded.

We'd have to feed the hungry, house the indigent, clothe the naked, and, in fact, do all the things we're technically commanded to do in many sacred texts but somehow find ways to fail at, even in some of the most prosperous nations ever seen in the history of the world.

In an AI-powered world, humans would have to grant equality and the means of existence with no monetary reward, or, which may be more likely given human nature, in return for some form of indentured servitude to the people and/or the state that “gave” people these things.

We might eventually arrive at a point where we in the Western world are fine with that, but a) competing cultures may not follow suit, and b) we won’t get there without something akin to the destruction of monetarily-based meritocracy, AKA capitalism or neo-liberalism.

An unlikely prophecy?

The upside, though – certainly in the short term – is that Musk's ideas are unlikely to translate into reality.

Professor Emma Parry from the Cranfield School of Management told the UK’s Press Association that while Musk’s words were helpful in terms of getting people prepared for the change to the future of work as we know it, his notion that AI could eventually do all jobs was “sensationalist” and “not helpful.”

“It is unlikely that we will see this in the near future,” she explained, “but given AI is allowing us to automate an increasing number of tasks, from routine transactions to data analysis, there is potential for workplaces to be automated completely. AI will continue to have an impact on the way we work, but it will not take away our jobs anytime soon.”

Free beer from our robots, driven by AI?

“Nøgne Ø bottling line” by Bernt Rostad is licensed under CC BY 2.0.

What, then, does the Professor see as the future of work in the age of AI?

“As long as organizations prioritize upskilling employees so that we can work alongside AI, we could even see higher levels of higher quality and enjoyable tasks, resulting in improved job satisfaction that would, in turn, benefit workplace productivity,” Professor Parry conjectured.

Without wishing to rain on the Professor’s parade, every previous revolution leading to a radical change in the future of work has tended towards workers being removed and replaced rather than upskilled – because upskilling takes economic investment and removal results in economic savings.

Safeguarding workers’ rights in the future of work.

The UK’s Trades Union Congress (TUC), a body acting towards the collective good of workers in trade unions across society, appears to have recognized the whiff of former revolutions in the march of AI.

It has launched an AI taskforce, demanding what it calls “urgent” new laws around the operation of generative AI to “safeguard workers’ rights and ensure AI benefits all.”

It’s lost on no one that the person with the most power to implement such laws is Prime Minister Rishi Sunak, who interviewed Musk and drew the “AI will do all the work” claim from him.

TUC Assistant General Secretary Kate Bell, on hearing the Musk prophecy, said, “Without proper regulation of AI, our labor market risks turning into a wild west. We all have a shared interest in getting this right.”

All except potentially those who own the large language models. The future of work, and what it looks like, hangs in the balance. We have no recourse to act as 21st-century Luddites. The question is whether we can construct a strong enough cage of laws around AI to keep it on everyone's side.

 

The post AI and the future of work appeared first on TechHQ.

]]>
The hidden impact of streaming on environmental sustainability https://techhq.com/2023/11/what-impact-does-streaming-have-on-environmental-sustainability/ Mon, 06 Nov 2023 10:30:36 +0000 https://techhq.com/?p=229531


• Environmental sustainability is a core consideration for many businesses in the 2020s.
• But the environmental costs of audio and video are rarely counted in the equation.
• Audio and video post-pandemic trends have a huge silent impact on environmental sustainability.

Environmental sustainability is one of the key issues of our age. In particular, it's an issue because of our ever-growing need for data centers – which, in their traditional form, are about as potent an energy-sucking planet-killer as it's possible to imagine.

We need more data centers for a whole range of reasons – our new, impetuous love affair with generative AI and machine learning is certainly a challenging drain on our resources, and one that demands more and better data center solutions.

But the post-pandemic world is different from the pre-pandemic version in a couple of other ways, too.

Firstly, the use of video conferencing to conduct essential trans-geographic business – and to keep distant family members connected – had been growing before the world ever learned of the existence of Covid-19.

Throughout the pandemic though, video conferencing technology came into its own, establishing a way that businesses could continue to function when proximity to other human beings was a strongly prohibited danger (except, as it turns out, in certain seats of power around the world).

And in the post-pandemic world, that ability to connect by video in several directions at once has allowed for the continued existence and increasing legitimacy of the remote and hybrid working models.

What is rarely considered is the impact of all that video and audio – transmitted over networks and stored in data centers – on the overall picture of environmental sustainability.

And in addition to that, we're now in a world where (apart from hipsters, obsessives and retro-snobs with vinyl collections), possessing enormous archives of physical media, be it music or video, is seen as at best charmingly quaint, and at worst, horrifyingly outdated – as though you still ran your network computer architecture on floppy disks.

It’s OK, we’ll wait while you Google what those were.

But the point is that every Spotify playlist streamed, every "Alexa, play Bluegrass Metallica Covers Playlist," every "Netflix and chill" (and actual Netflix), and every demand for instant aural gratification beamed or streamed from Somewhere Else Entirely has a significant cost in terms of the overall picture of environmental sustainability – a cost which is rarely acknowledged, let alone counted.

A fuss over nothing?

So how much of an impact on environmental sustainability do audio and video have – and should we really care, when there are giant corporations doing what we presume are probably far worse things to the environment in the name of profit?

Number 1 – we should care about everything. If there's a sector of industry sneaking by without feeling guilty about its part in the broiling of the planet, then we should absolutely shine a light on it, because to hell with them getting away with it while you're washing out your tin cans and separating your recycling into however many different bags it is this week.

And number 2 – it might have much more of an impact than you imagine when you stream Beyoncé's latest, or watch hour after hour of YouTube cat videos.

We spoke to Rob Reng, CTO of IRIS Audio, an AI audio startup which aims to deliver cleverly clearer audio in the likes of call center settings, without the current level of energy wastage.

Naturally, having a business model like that, Rob was going to tell us about the dangers of audio and video from an environmental sustainability vantage point.

But on the other hand, he did bring a bagful of sobering statistics with him…

The Spanish Inquisition – no one’s expecting this.

THQ:

OK, Rob – we’re connecting right now via video conferencing software. What’s the impact of that on CO2 levels and environmental sustainability? Should we all just turn off our cameras? Please say yes, it would be an introvert’s dream…

What’s the scale we’re talking about here?

RR:

How about the size of Spain? That big enough for you?

THQ:

Pretty chunky, admittedly, but give us context. We can’t just say “It costs as much as Spain!” and turn our cameras off.

RR:

Well, we sort of could. There was a project back in 2019 – pre-pandemic, of course – that assessed the cost of global streaming, in terms of environmental sustainability, at around the 300 million tons of CO2 mark.

Which is roughly on a par with the entire greenhouse emissions of Spain for a year.

Are you quite all right, you seem to be boggling a little?

THQ:

A Spainsworth. Per year. In 2019, before Covid really got its boots on and we revolutionized the work culture of the world.

Environmental sustainability - streaming=Spain.

Spain – which is, let’s remember, an innocent bystander in this eco-tastrophe.

RR:

Ah. Yes. It can hit people that way. But yes – we can’t shy away from the fact that every time we’re on a meeting like this, every time we’re streaming, every time we’re downloading anything, it’s having a significant effect. There are so many of us on this planet, and there are so many people streaming content. Imagine YouTube, Spotify, Alexa, Siri – they’re on all day for a lot of people. Burning carbon as they go.

THQ:

Well, on the grounds that we can’t, for instance, reciprocally just shut Spain down every second year or so as a kind of carbon offset, what do we do about it?

Talking in the dark like heathens.

RR:

Well, as you pointed out, you don’t necessarily always need to have video on your calls. Certainly, that’s what we do in the office: if you need to have your face on for one reason or another, then by all means, have it on. But a lot of the time, we just switch our cameras off and just have the voice component.

The environmental sustainability impact of video conferencing.

No, really – that meeting should have been an email…

When it comes to voice, we were looking specifically at an article in The New Statesman in 2021, which highlighted the streaming impact of Spotify alone. Olivia Rodrigo had the biggest hit of 2021 with Drivers License, which had 1.1 billion streams. And just that song on its own was equivalent to 4,000 flights from London to New York.

THQ:

Bear with us a moment, we may be having an environmental panic attack.

RR:

That’s just one song. And there are obviously millions of songs being streamed at any one time, not just from Spotify, but from Apple Music, and Amazon and everybody else.

THQ:

We just did some research while having a panic attack. Turns out there are over 300 tracks on Spotify with over a billion streams. 300 Drivers Licenses. That’s… that’s 1,200,000 London-New York flights. Queen songs on their own have been streamed 17 billion times.

And that’s just on Spotify. And it’s not counting all those additional streams that haven’t hit the golden billion yet.

Streaming has invisible environmental costs.

What price your favorite streams?

The impact on environmental sustainability.

RR:

Yyyyyeah. We're very aware of this, and of the impact audio is having. And not only that, but you have sites and services wanting to upgrade their audio, to add clarity and quality, and so upgrade the service they supply.

So if you go up to super HD, for instance, you’re multiplying the size of the file by up to 20 times. So there’s already a problem. And that’s going to make the problem even worse.

By a factor of 20.
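Where does that factor come from? Here's our rough, hedged arithmetic – the bitrates are our assumptions, not Rob's figures:

```python
# Rough check on the "up to 20 times" figure; bitrates are assumptions.
standard_stream_kbps = 160   # a typical compressed music stream
super_hd_kbps = 4608         # 24-bit / 96 kHz stereo, uncompressed PCM

print(f"multiplier ≈ {super_hd_kbps / standard_stream_kbps:.0f}x")
# ≈ 29x uncompressed; lossless compression (e.g. FLAC) shrinks hi-res
# files enough to land in the region of the quoted "up to 20 times".
```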

THQ:

Well… that’s arresting.

RR:

So what we were interested in was using AI to process audio in real time. If we could train a network on what high-quality audio looks like and what low-quality audio looks like, and then, in real time, upscale the audio from a low resolution to a high resolution at the endpoint, then we could transport and ship audio in low bandwidth –

THQ:

Which for our purposes means lower impact on environmental sustainability?

RR:

Yes, so you could send it low, but as a recipient, you'd hear it as if it had been transmitted in hi-fi.

Increasing quality has usually meant increasing environmental damage.

Raise the quality, raise the environmental damage?

A novel solution.

THQ:

Let's just make sure we have this right. So the sound is sent in low quality, be it voice in a call center or the 1.2 billionth stream of Drivers License, but there's AI post-processing at the receiver's end, which boosts it to high quality in situ – so it doesn't waste the energy of streaming high-quality audio, but also doesn't punish the would-be stream queens with low-quality results?

RR:

That’s the idea, yes. We began about nine months ago, and we’re just beginning to see it come to fruition. We’ve started with voice products, and we’ll move onto music once we’ve got the voice products embedded.

But yes, we’re seeing good results from the voice products already.

The green credentials of streaming could be more complicated than you think.

 

In Part 2 of this article, we’ll find out exactly how this mystic audio sorcery works – and how businesses could use it to both increase the quality of their audio and take steps towards burning less planet per quarter in carbon costs.

In the meantime, if you need us, we’ll be on an entirely gratuitous flight to New York.

The post The hidden impact of streaming on environmental sustainability appeared first on TechHQ.

]]>
Biden executive order brings us closer to regulations on AI https://techhq.com/2023/11/does-the-biden-executive-order-bring-us-closer-to-regulations-on-ai/ Wed, 01 Nov 2023 14:00:51 +0000 https://techhq.com/?p=229456


• Regulations on AI have been demanded since the second after ChatGPT arrived.
• The new Biden executive order gets us closer than we’ve ever been to a framework of AI law.
• It tackles eight major areas of concern with generative AI.

Since generative AI became a reality in November 2022, thanks to OpenAI and its Microsoft-backed chatbot, ChatGPT, people, organizations and governments around the world have been calling for regulations on AI.

Potential dangers of the technology have run the gamut from the standard sci-fi “Algorithmic overlords will kill us all and/or destroy the planet,” through the significantly more likely “the technology will put whole armies of people out of work,” to the most likely of all, “It’s going to have exploitation of workforces, misogyny, bigotry and all the other unfairnesses of our society baked right in and normalized.”

There have been entirely legitimate concerns on the nature, quality and bias of the data on which large language models are trained, and equally legitimate worries that, given the startlingly rapid adoption of generative AI across the business community of the world, any regulations on AI would either come too late to be effective, or be too broad to do any good.

The Biden executive order arrives, proposing sweeping additions to regulations on AI.

Will the executive order be effective?

The European Union was first out of the gate in terms of developing regulations on AI, and while its provisions in the EU AI Act are a brave stab at delivering guardrails on AI technology, they were begun in the era before generative AI – so while they deal comprehensively with pre-generative technology, their regulations on generative AI are something of a blunt instrument.

While there's no legal framework in the world where those who are regulated get to say how far the regulations should go, OpenAI's Sam Altman felt free to add that he felt the European approach was "overregulation," and went on a slightly desperate last-minute European tour before the provisions of the Act were made public, trying to get them amended.

Without regulations on AI, we’re doooooomed!

Speaking of Altman, he’s previously spoken to the likes of Senate subcommittees about what he believes – or at least is eager to make it appear that he believes – are the dangers of the technology which has made his name and fortune, up to and including complete human extinction.

While it’s worth noting that in the wake of that testimony, he floated a security technology which could allegedly keep user data safe even from increasingly sophisticated generative AI, the like of which he was also keen to develop, Altman’s been back in the headlines just this week.

While he continues to acknowledge the feasibility of some of the wilder disaster-claims for generative AI (and the fact that we're still in the very early days of the technology's use, despite its wild breadth of uptake and application), Altman now says – probably with the most open honesty of any of his recent statements – that there's no putting the genie of AI back in its bottle. So he wants regulations around AI that make it safe from use by bad actors, without unfairly penalizing those who are trying to use the technology to advance humanity's capabilities.

Are regulations on AI really needed?

“You crazy kids keep it down! If I have to come in there, there’ll be trouble…”

Which brings us to the Biden administration’s executive order.

While the White House had informal talks with some of the leading players in generative AI earlier in the year, and Democratic Senator Chuck Schumer has done some work on establishing initial guidelines on the technology, the new executive order is the most significant step the US government has so far taken towards a set of regulations on AI.

There are eight fundamental principles to the executive order:

  • Standards for safety and security
  • Protecting citizen privacy
  • Advancing equity and civil rights
  • Protecting consumers, patients and students
  • Supporting workers
  • Promoting innovation and competition
  • Advancing American leadership abroad
  • Ensuring responsible and effective government use of AI

The fundamental principles are both modern in terms of the technologies to which they apply, and distinctly Bidenesque – surfacing from under the radar with little by way of advance warning, heady with pragmatism and drenched in American motherhood and apple pie.

But there’s no denying that they also touch on many of the main concerns that have been raised with the application and use of generative AI so far.

Sam Altman of OpenAI.

"First word? Starts with D? Democratic oversight?!" Sam Altman of OpenAI.

Breaking down the Biden order.

On safety and security: the order requires makers of powerful AI systems to share their safety test results with the US government, and instructs the National Institute of Standards and Technology (NIST) to set rigorous standards for red-team testing of the safety of such systems before they’re allowed to be released for public use.

In addition, it provides for the establishment of an advanced cybersecurity program, to find and fix vulnerabilities in critical software, and establishes a National Security Memorandum to direct further actions on AI and security, so that the US military and intelligence community are bound to use AI safely, ethically, and effectively in their missions.

On protecting privacy: the order calls on Congress to pass bipartisan data privacy legislation to protect all Americans and their data. Such legislation on AI should include priority federal support for the development of privacy-preserving techniques.

Such AI legislation should also develop guidelines for federal agencies, so they can assess the effectiveness of available techniques to preserve data and personal privacy in the age of generative AI.

On equity and civil rights: The likelihood of generative AI engraining social prejudices into the “way things work” has been shown time and time again. The order demands that developers address algorithmic bias, and pledges the development of best practice in critical use cases like the criminal justice system.

On consumer, patient and student protection: the order commits the government to advancing the responsible use of AI in healthcare, and to provide a system to report any issues that arise from the use of AI in a healthcare setting.

It also commits the government to developing supporting resources to allow educators to safely deploy AI in the classroom.

On supporting workers: This is one of the biggest issues, because one of the biggest fears the public has is that AI will put them out of work. The order’s response might, to some, feel a little wishy-washy – it pledges to develop best practices and principles to “address” job displacement, labor standards, workplace equity, health, and safety, and data collection.

It also commits the government to producing a report on the potential impact of AI on workplaces, and any necessary mitigation strategies as we shift from a largely human workforce to a mixed human-system workforce.

On promoting innovation and competition: the order is on firmer, if no more original, ground. It will use the National AI Research Resource – a tool to provide AI researchers and students with access to key AI resources and data – and expand available grants for AI research in areas of national and international significance, like healthcare and climate change.

It will also promote the growth of a ground-up AI ecosystem by giving small developers and entrepreneurs access to technical assistance and resources, and helping small businesses commercialize AI breakthroughs. The idea behind that is not only to spread the general public’s knowledge and acceptance of generative AI, but also to ensure the technology doesn’t become a bottleneck technology of the extremely rich and the mortifyingly powerful.

On advancing American leadership abroad: possibly the most thoroughly Biden element of the order, it pledges that the US government will work bilaterally and multilaterally with stakeholders abroad to advance the development of AI and its capabilities.

Unless of course those stakeholders are Russian, Chinese, or presumably, given the latest state of the newest pre-global conflict in the world, Palestinian.

And on ensuring responsible and effective government use of AI: the order is on stronger ethical footing – it provides for the rapid access by agencies of appropriate AI technology, the development of appropriate agency guidance for the use of that technology, and the swift hiring in of expertise in such technology, so that the US government and its agencies can be as clued-up as they need to be in 2024 and beyond.

Of all the attempts so far to develop wide-ranging and effective regulations on AI, the Biden executive order is by far the most comprehensive.

How much of the order sees the long-term light of day, is taken up as a set of guiding principles internationally, or at this point even survives the 2024 presidential election, remains to be seen.

The post Biden executive order brings us closer to regulations on AI appeared first on TechHQ.

]]>