Regulatory Compliance - TechHQ

Spotify, Epic decry Apple terms under EU compliance https://techhq.com/2024/03/open-letter-to-apple-from-spotify-and-epic-on-terms-and-conditions/ Tue, 05 Mar 2024 09:30:12 +0000 https://techhq.com/?p=232501

  • Spotify among companies complaining about Apple EU developer terms & conditions.
  • Anti-competitive practices make sideloading more expensive.
  • Software companies likely to keep working under existing Apple terms & conditions.

With iOS 17.4 due to be released in the coming week, 30 companies have penned an open letter to the European Commission, media groups, and lobby organizations, stating their concerns about Apple’s terms and conditions, which they claim will still leave the company in contravention of the EU’s Digital Markets Act.

To comply with the DMA, Apple is now allowing third-party app stores and the sideloading of applications downloaded independently. Developers will be given a choice between signing up to Apple’s new terms or sticking with the existing T&Cs, which the group claims is a “false choice.” The new terms, the signatories claim, will “hamper fair competition with potential alternative payment providers.”

“Rotten apples” by fotologic is licensed under CC BY 2.0.

To aid developers in their choice, Apple provides a handy calculator to guide them through the myriad available options. Developers in the EU select whether they will qualify for the App Store Small Business Program, what App Store fees they would pay, and the value of in-app purchases they predict users will make – under both the new and the old terms.

What will surprise absolutely no one is that developers will end up paying more money to Apple if they choose to allow their apps to be sideloaded than they currently pay under existing terms. They will also have the cost of running an app store, a customer support function, and a payment processor. For developers, keeping business as usual under Apple’s existing terms results in greater revenue. The only way to preserve income under Apple’s new terms with apps served from a third-party store is to raise the price that consumers pay.
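
For a sense of the arithmetic, here is a rough, hypothetical sketch of the comparison Apple’s calculator walks developers through. The rates used (30% or 15% commission under the legacy terms; 17% or 10% plus 3% payment processing under the new terms; a €0.50 Core Technology Fee on each first annual install above one million) reflect the fee schedule Apple published alongside its DMA changes, but the revenue and install figures are invented, and the sketch deliberately ignores the developer’s own costs of running a store, support desk, and payment processor.

# Hypothetical comparison of developer payouts under Apple's legacy EU terms
# and the new DMA-era terms. Commission rates follow Apple's published fee
# schedule at the time of writing; the scenario figures are invented.

CTF_PER_INSTALL = 0.50          # EUR Core Technology Fee per install...
CTF_FREE_INSTALLS = 1_000_000   # ...above the first million annual installs


def core_technology_fee(annual_installs: int) -> float:
    return max(annual_installs - CTF_FREE_INSTALLS, 0) * CTF_PER_INSTALL


def legacy_app_store(revenue: float, small_business: bool = False) -> float:
    """Net revenue under the existing App Store terms (no CTF)."""
    commission = 0.15 if small_business else 0.30
    return revenue * (1 - commission)


def new_terms_app_store(revenue: float, installs: int) -> float:
    """Net revenue staying on the App Store under the new terms:
    17% commission, plus 3% for Apple's payment processing, plus CTF."""
    return revenue * (1 - 0.17 - 0.03) - core_technology_fee(installs)


def new_terms_sideloaded(revenue: float, installs: int) -> float:
    """Net revenue via a third-party marketplace: no Apple commission,
    but the CTF still applies - and the developer now carries the
    distribution and payment costs this sketch does not model."""
    return revenue - core_technology_fee(installs)


if __name__ == "__main__":
    revenue, installs = 2_000_000, 10_000_000  # a popular freemium app
    print(f"Legacy terms:          EUR {legacy_app_store(revenue):>12,.0f}")
    print(f"New terms, App Store:  EUR {new_terms_app_store(revenue, installs):>12,.0f}")
    print(f"New terms, sideloaded: EUR {new_terms_sideloaded(revenue, installs):>12,.0f}")

With ten million installs, the Core Technology Fee alone comes to €4.5 million a year, which is how a popular app with modest in-app revenue ends up worse off under either of the new options than it is under the terms it already operates on.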

This puts some of the more hyperbolic language of the open letter to the European Commission into context. It claims that “Apple is rendering the DMA’s goals of offering more choice and more control to consumers useless.” Consumers will rarely have a choice to sideload an app or download it from a third-party store because no application developers will opt to make less money.

The letter states:

“New app stores are critical to driving competition and choice both for app developers and consumers. Sideloading will give app developers a real choice between the Apple App Store or their own distribution channel and technology. Apple’s new terms do not allow for sideloading and make the installation and use of new app stores difficult, risky and financially unattractive for developers. Rather than creating healthy competition and new choices, Apple’s new terms will erect new barriers and reinforce Apple’s stronghold over the iPhone ecosystem.”

Apple’s new terms do “allow for sideloading” – in this, the letter is incorrect – but its terms are deliberately anti-competitive. The company is indeed “[making] a mockery of the DMA and the considerable efforts by the European Commission and EU institutions to make digital markets competitive.”

Something rotten in the state of Apple? Suuuurely not? “rotten apple” by johnwayne2006 is licensed under CC BY-NC-SA 2.0.

It would be naive to believe that the signatories of the letter are beating a drum for consumers’ right to choose where they source their apps from. The motives of Epic Games, Spotify, Uptodown, et al. are as mercenary and cynical as Apple’s. They expected to make more money thanks to the DMA’s imposition but have been thwarted, at least for now. The ‘Apple Tax’ paid by companies with apps on the App Store is a thorn in the side of shareholders of every business dependent on Apple’s App Store.

For the next few years, European taxpayers will fund the inevitable legal battle the European Commission will wage on behalf of the likes of Spotify (2023 Q4 revenue of €3.7 billion, with €68 million in adjusted operating profit) and Epic Games (valued at $31.5 billion in 2023), so that justice can be granted to these stalwart defenders of consumer choice.

Under the Digital Markets Act, violators may be fined up to 10% of worldwide turnover, which for Apple would amount to approximately $38 billion plus change. Likely, it won’t come to that, but as ever, Cupertino can afford its lawyers’ salaries for a few years until it can find ways to recoup the costs of operating in a competitive market – at least, in the EU. Developers and consumers in the US, UK, and elsewhere can look forward to business as usual.
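
As a sanity check on that headline figure – and it is only a rough check, using Apple’s reported fiscal 2023 revenue of roughly $383 billion, a number that does not appear in this article – the maximum first-offence DMA penalty works out like this:

# Rough check on the "approximately $38 billion" figure quoted above.
# Apple's FY2023 revenue of ~$383 billion is an external, approximate figure;
# the DMA caps fines at 10% of worldwide turnover (rising to 20% for
# repeat infringements).
apple_fy2023_revenue = 383e9                 # USD, approximate
max_dma_fine = 0.10 * apple_fy2023_revenue
print(f"Maximum first-offence DMA fine: ${max_dma_fine / 1e9:.1f} billion")
# -> roughly $38.3 billion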


Exploring the groundbreaking EU AI Act https://techhq.com/2023/12/what-will-the-eu-ai-act-say/ Tue, 12 Dec 2023 09:30:40 +0000 https://techhq.com/?p=230625

  • Transparency requirements have been introduced by the EU in the AI Act for developers of general-purpose AI systems like ChatGPT.
  • It also bans unethical practices, such as indiscriminately scraping images from the internet for facial recognition databases. 
  • Fines can range up to €35 million or 7% of global turnover, with the severity determined by the nature of the infringement and the company’s size.

In a significant stride toward regulating the rapidly evolving field of AI, the European Union has recently achieved a milestone with the approval of the EU AI Act. This landmark legislation marks a defining moment for the region, setting the stage for comprehensive guidelines governing the development, deployment, and use of AI technologies.

It all started in April 2021, when the European Commission proposed an AI Act to establish harmonized technology rules across the EU. At the time, the draft law may have seemed fitting for the existing state of AI technology, but it took more than two years for the European Parliament to approve the regulation. In that period, the landscape of AI development was far from idle. What the bloc did not see coming was the release and proliferation of OpenAI’s ChatGPT, which showcased the capability of generative AI – a subset of AI that was foreign to most of us.

As more and more generative AI models entered the market following the dizzying success of ChatGPT, the limits of the initial draft of the AI Act became apparent. Caught off guard by the explosive growth of these AI systems, European lawmakers faced the urgent task of determining how to regulate them under the proposed legislation. Even so, the European Union remains ahead of the curve, especially when it comes to regulating the tech world.

So, following a long and arduous period of amendment and negotiation, the European Parliament and the bloc’s 27 member countries finally overcame significant differences on controversial points, including generative AI and police use of face recognition surveillance, to sign a tentative political agreement for the AI Act last week.

“Deal!” tweeted European Commissioner Thierry Breton just before midnight. “The EU becomes the first continent to set clear rules for using AI.” This outcome followed extensive closed-door negotiations between the European Commission, European Council, and European Parliament throughout the week, culminating in a final round of talks that stretched across three days and some thirty-six hours.

Yes, but how good is it? Source: Thierry Breton on X

“Parliament and Council negotiators reached a provisional agreement on the AI Act on Friday. This regulation aims to ensure that fundamental rights, democracy, the rule of law, and environmental sustainability are protected from high-risk AI while boosting innovation and making Europe a leader. The rules establish obligations for AI based on its potential risks and level of impact,” the European Parliament said.

The AI Act was initially crafted to address the risks associated with specific AI functions, categorized by their risk level from low to unacceptable. However, legislators advocated for its extension to include foundation models—the advanced systems that form the backbone of general-purpose AI services, such as ChatGPT and Google’s Bard chatbot.

Dragoș Tudorache, a member of the European Parliament who has spent four years drafting AI legislation, said the AI Act sets rules for large, powerful AI models, ensuring they do not present systemic risks to the Union, and offers strong safeguards for citizens and democracies against any abuse of technology by public authorities.

“It protects our SMEs, strengthens our capacity to innovate and lead in AI, and protects vulnerable sectors of our economy. The EU has made impressive contributions to the world; the AI Act is another one that will significantly impact our digital future,” he added.

What makes up the EU AI Act?

While the EU AI Act is notable as the first of its kind, its full details remain undisclosed. A public version of the text is not expected for several weeks, making it challenging to provide a definitive assessment of its scope and implications unless the document leaks first.

Members of the European Parliament take part in a voting session during a plenary session at the European Parliament in Strasbourg, eastern France, on November 22, 2023. (Photo by FREDERICK FLORIN / AFP).

Policymakers in the EU embraced a “risk-based approach” to the AI Act, focusing intense oversight on specific applications. For instance, companies developing AI tools with high potential for harm, especially in areas like hiring and education, must furnish regulators with risk assessments, data used for training, and assurances against damage, including avoiding perpetuating racial biases. 

The creation and deployment of such systems would require human oversight. Additionally, specific practices, like indiscriminate image scraping for facial recognition databases, would be banned outright. And, according to EU officials and earlier versions of the law, chatbots and software producing manipulated images, including “deepfakes,” must explicitly disclose their AI origin.

Law enforcement agencies’ and governments’ use of facial recognition software would be limited, with specific safety and national security exemptions. The AI Act also prohibits biometric scanning that categorizes people by sensitive characteristics, such as political or religious beliefs, sexual orientation, or race. “Officials said this was one of the most difficult and sensitive issues in the talks,” a report by Bloomberg reads.

While the Parliament advocated for a complete ban last spring, EU countries lobbied for national security and law enforcement exceptions. Ultimately, the parties reached a compromise, agreeing to restrict the use of the technology in public spaces but implementing additional safeguards.

The suggested legislation entails financial penalties for companies breaching the rules, with fines ranging up to €35 million or 7% of global turnover. The severity of the penalty would be contingent on the nature of the violation and the size of the company. While civil servants will finalize some specifics in the coming weeks, negotiators have broadly agreed on introducing regulations for generative AI.

Some 85% of the technical wording in the bill has already been agreed on, according to Carme Artigas, AI and Digitalization Minister for Spain (which currently holds the rotating EU presidency).

So far, EU lawmakers have been determined to reach an agreement on the AI Act this year, in part to drive home the message that the EU leads on AI regulation, especially after the US unveiled an executive order on AI and the UK hosted the international AI Safety Summit; China, too, has developed its own AI principles.

However, next year’s European elections in June are also quickly closing the window of opportunity to finalize the Act under this Parliament. Despite these challenges, the EU’s success in finalizing the first comprehensive regulatory framework on AI is impressive.

 

EU moves closer to generative AI regulation https://techhq.com/2023/06/eu-generative-ai-regulation-smes-social-media-foundation-models/ Fri, 16 Jun 2023 21:17:11 +0000 https://techhq.com/?p=225547


• The EU has voted to accept draft language on generative AI regulation.
• The EU AI Act straddles the pre- and post-generative AI eras.
• Concessions and exemptions exist for SMEs.

The EU has gained significant plaudits in recent months by being the first jurisdiction to come anywhere close to having regulation in place to govern the use and application of generative AI technology.

The news of the EU vote was shared widely on social media.

And this week the EU AI Act took significant steps towards becoming law across the EU jurisdiction. The initial text of the draft legislation that could eventually become the fully-fledged EU AI Act was approved by the EU’s main legislative branch.

But this is not evidence of the EU being either prescient or swift to act – anyone with significant experience of the EU’s legislative processes knows it’s as swift as a housebrick with a hernia.

It’s only as far ahead as it is because when it started thinking about AI, it wasn’t in any sense contemplating a world that had to contend with the complexities of generative AI.

A gumbo of tech concerns.

That’s why the draft language of the regulation that was approved this week lumps a lot of seemingly disparate areas of “AI” technology together, including AI-enhanced biometric surveillance, emotion recognition, and predictive policing, alongside generative AI like ChatGPT.

And where generative AI is mentioned, it is mentioned in relatively broad – but distinct and important – strokes, such as the high-risk status of systems used to influence voters in elections.

How will the eventual EU AI Act look? It’s still too early to tell.

Elements of the draft regulation that declare that generative AI systems must disclose that AI-generated content is AI-generated content, rather than individually human-created content, may not only make lawyers everywhere rub their hands, but might also be incredibly complex to implement in real terms.

As more and more companies across the world and across the business sphere implement some version of generative AI in their back-ends or subsystems to smooth out business processes, there may be further complication in adhering to that element of the regulation – assuming it makes it all the way from the draft language to the statute books.

What AI would be forbidden from doing.

If the draft regulation becomes essentially the body of the EU AI Act, there are several distinct areas of activity from which AI would be effectively banned. These areas of activity would be those judged to carry “an unacceptable level of risk to people’s safety,” including areas that MEPs judge to be intrusive or discriminatory.

The list of those areas is telling in terms of the relatively long gestation period of even the draft language of the regulation, with generative AI specifically barely featuring.

They include:

  • “Real-time” remote biometric identification systems in public spaces;
  • “Post” remote biometric identification systems – for everyone except law enforcement agencies prosecuting serious crimes, and with judicial authorization;
  • Biometric categorization systems using sensitive characteristics including gender, race, ethnicity, citizenship status, religion, political orientation;
  • Predictive policing systems (including systems based on profiling, location or past criminal behaviour);
  • Emotion recognition systems in law enforcement, border management, the workplace, and educational institutions; and
  • Untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases (violating human rights and right to privacy).

The list – while it represents significant progress in terms of technology regulation and guidance – shows the nature of the fears that were most prevalent when the EU AI Act was first being drawn up in 2018-19.

High-risk AI.

The draft regulation language does of course include generative AI, but not within those initial concerns.

It goes on to define “high-risk AI” – systems that “pose significant harm to people’s health, safety, fundamental rights or the environment” – as including systems used to influence voters and the outcome of elections and, in what may come as a blow to the likes of Meta, recommender systems used by social media platforms with over 45 million users.

We may yet be doing Meta a disservice, but it’s highly plausible the company will organize resistance to that particular addition to the language of the regulation between now and it becoming law.

And when it comes to specifically dealing with generative AI in the sense with which everybody is already familiar, the draft regulation language does cut through some of the knots of circular thinking and speculation that, for instance, US legislators have yet to untangle ahead of coming up with their own generative AI regulations.

Providers of foundation models will have to assess and mitigate possible risks (to health, safety, fundamental rights, the environment, democracy and rule of law), and register their models in the EU database before their release would be allowed on the EU market.

Rules without – as yet – much meat on their bones.

Generative AI systems based on such models, like ChatGPT, Bard, and others, would have to comply with transparency requirements. That means they would not only have to clearly state what content was generated by generative AI, they would also have to “help distinguish deepfake images from ‘real’ ones.”

The draft language is tellingly light on information as to how it believes generative AI systems would be able to help do that. At this stage, it’s more a decree than it is a formula for action.

There are strong demands in the draft regulation – “Make it so!” – without too much detail on how to get it done.

Generative AI would also have to ensure safeguards against generating illegal content – again, more of a decree than a recipe at this stage. And, in a relatively obscure assertion that may be what led OpenAI’s CEO Sam Altman to describe the draft language as “over-regulation,” companies developing generative AI based on large language models would be required to provide detailed summaries of the copyrighted data used for their training – summaries that would have to be publicly available.

Sam Altman in Paris recently, aiming to water down the wording of the EU AI Act. Source: JOEL SAGET / AFP

Notes of hope.

While some of these elements may need significant refining and expansion before they can constitute any effective regulation around generative AI, there were notes of hope in a sequence of compromises, agreed through committees.

There had been significant concerns that while the multi-million-dollar companies behind the leading generative AI might be able to weather the storm of the regulation, SMEs and other smaller – or smaller-budgeted – organizations might not be able to survive the rigors of compliance and the potentially destructive costs of inadvertent failure to abide by the rules.

The concessions and compromises included clauses aimed at boosting AI innovation and the growth of an SME culture of generative AI application and use. That means there are exemptions in the draft regulation language that would protect SMEs, non-profit organizations, and small free software projects up to the size of micro-enterprises.

Honing the blade of law.

There remains a lot of work to do to turn the current draft of the EU AI Act into a workable set of regulations – and there may well be significant lobbying to get through along the way.

But the vote to approve at least the draft wording puts the EU significantly ahead of the game when it comes to working out ways to deal responsibly with a technological chimera that, for instance, the US has yet to seriously grapple with.

Twitter in transphobia storm amid executive exits https://techhq.com/2023/06/twitter-loses-two-top-content-moderators-while-caught-in-transphobic-content-battle/ Mon, 05 Jun 2023 09:00:48 +0000 https://techhq.com/?p=225172


• Twitter loses two top content moderators in two days.
• Musk intervenes, removing “hateful content” warning from transphobic video.
• EU officials to assess Twitter’s moderation policies.

Trigger warning – images of anti-trans tweets included.

As digital trust and the nature of objective truth become increasingly hazy concepts – and yet increasingly vital commodities in the tech world – Twitter under Elon Musk has displayed its now legendary sense of timing, with the resignation of Ella Irwin, the company’s head of trust and safety, and the reported exit of A.J. Brown, Twitter’s head of brand safety and ad quality, the following day.

So far, perhaps, so ordinary – people resign from top jobs in the tech industry with a regularity that, if not exactly tedious, is at least relatively predictable.

The decimation of the content moderators.

What’s more, in 2023, people resigning from Twitter has become almost comically commonplace.

Musk himself has described the organization before his takeover as “absurdly overstaffed,” and claimed that the loss of 80% of its previous workforce had left Twitter “working better than ever.”

That’s a statement belied by several outages since the staff numbers were decimated, and a detectable exodus of some high-profile Twitter evangelists since Musk saw fit to reinstate several controversial accounts that had previously been banned.

He explained his thinking to right-wing agitprop Fox News host, Tucker Carlson.

However commonplace people leaving their jobs at Twitter may feel by June 2023, the timing and the circumstances of Irwin and Brown’s departures could be particularly significant – even though neither executive has given either a reason or a context for their leaving.

Timing is everything.

Just weeks from now, Twitter is due to be examined by officials from the European Union on its handling of user content.

Given recent mega-fines handed to various parts of Mark Zuckerberg’s Meta empire by EU data privacy enforcers (over $1bn most recently), Twitter might well have wanted to get a clean bill of health from those examiners.

Losing two of your top content moderation officers just weeks before you have to endure the scrutiny of people who have made it clear that the size and influence of your social media platform holds no fear for them, and that they will fine you millions, or now even billions of dollars, might therefore be considered to be, at the very best, an unfortunate sequence of events.

But even that optimistic appraisal looks increasingly unlikely, as the company has found itself in the center of a culture war which – depending on how much coincidence you believe in – might have just happened to hit at exactly the same time as the departure of Irwin, but probably didn’t.

What Is a Woman?

The war began with conservative outlet The Daily Wire. It has a self-described “documentary” coming out soon entitled What Is a Woman?

As is to be expected from a conservative outlet, the “documentary” takes a conservative slant on gender and transgender issues. Where it runs into trouble is that it was made, according to presenter Matt Walsh, “in opposition to gender ideology.”

‘Gender ideology’ – for those new to the culture wars – is what people with anti-trans views call the reality of trans existence, the visibility and call for equality of trans people, and any social accommodations made to allow trans people to comfortably exist in a generally trans-exclusionary society.

In other words, something made specifically “in opposition to gender ideology” is made in opposition to both verifiable reality and the notion of acceptance of people with different life-journeys to those of the cisgender majority.

It is still technically possible to make a documentary on those terms and not fall into the territory of outright transphobia. It’s extraordinarily difficult, because the premise denies the lived experience of trans people, but it is just about possible – if a) you really believe there’s a debate to be had on the subject, and b) you tread very carefully.

It’s reported that the documentary contains at least two apparently deliberate instances of the misgendering of trans people. As such, it meets the agreed definition of transphobic content.

The Musk intervention – part 1.

What has any of this to do with Twitter, Musk, and the upcoming content investigation by the EU? When news of the upcoming documentary broke, Twitter’s content moderation team advised that it would be labelled as “hateful content.”

According to The Daily Wire, that was down to the two instances of misgendering, though it’s arguable that the whole concept of the piece might have been enough to get it flagged as hateful content under the existing Twitter reporting rules, despite what followed potentially rendering them meaningless.

Musk personally intervened on Thursday, June 1, saying that the labelling by the content moderation team was “a mistake by many people at Twitter,” and confirming that the “documentary” would be allowed to run without its “hateful content” label.

Ella Irwin, head of trust and safety, left the company the same day. Without explicit comment from either Irwin or Musk on the reasons behind her departure, it is technically impossible to draw reliable inferences from this fact.

The Musk intervention – part 2.

However, the following day, June 2, Musk doubled-down on his support of the documentary, pinning a tweet to the top of his account, saying “Every parent should watch this.”

Twitter boss promotes anti-trans "documentary."

The same day, he retweeted a story from Breitbart News that described anti-trans acts by various states as “protecting children from mutilation,” and highlighted President Biden’s opposition to those actions. Musk tweeted only the word “Insane” – seemingly referring to Biden’s pro-trans stance in his Pride month statement.

Twitter boss retweets Breitbart.

A.J. Brown is reported to have left the company on June 2.

Why does any of this matter? Well, naturally, it matters to trans Twitter users, who would be within their rights to regard these actions by the platform’s owner as the creation (or perpetuation) of an online environment innately hostile to their existence.

The timing and the fines.

But to return to the impending inspection by the EU, these incidents could hardly be timed more poorly if they’d been planned to blow up the company’s bottom line for the year.

The meeting with EU officials is set to determine whether Twitter is compliant with some fairly strict new content moderation rules to which “very large online platforms” will need to adhere if they’re to be allowed to operate within the EU.

It’s possible the new moderation rules will come into force as soon as August 2023 as part of the Digital Services Act.

Musk’s stewardship of Twitter throughout the course of late 2022 and 2023 has reportedly already caused the EU some significant concern over content moderation – he allowed some highly contentious accounts to be reactivated, and has generally driven the tone of the platform further to the right. His policy of paid-for verification has been seen as monetizing authenticity – and therefore destroying it.

On top of which, losing both his head of trust and safety and his head of brand safety and ad quality within the space of two days, personally revoking the standard on what constitutes hateful speech when it comes to anti-trans content, and then doubling-down on the questionable material with a pinned tweet and a supportive statement are all likely to alarm the EU officials when they visit Twitter’s HQ in San Francisco. The visit, it should be noted, is scheduled to take place during Pride month.

If Twitter faces a hostile examination, it won’t be its first – last month, the company withdrew from the EU’s code of conduct on disinformation, saying it felt it had “no alternative” but to do so.

Thierry Breton, a leading EU commissioner, was pointed in his response. “Obligations remain,” he said. “You can run but you can’t hide.”

EU officials may well feel similarly about the fitness for purpose of Twitter’s content moderation strategy – especially if the two posts remain unfilled by the time the assessment is made.

If Twitter is found to be noncompliant with the DSA rules, it could be fined up to 6% of its global annual revenue. That’s a fine Twitter desperately needs to avoid – but from which, to re-quote Breton, it would not be able to hide simply because it doesn’t want to pay.

 

OpenAI CEO backs down over European AI Act https://techhq.com/2023/05/openai-ceo-backs-down-over-european-ai-act/ Fri, 26 May 2023 21:49:40 +0000 https://techhq.com/?p=225021


Sam Altman, CEO of OpenAI, the Microsoft-funded creator of the ChatGPT and GPT-4 generative AIs, has backtracked on comments that the company would stop operating in Europe if the EU’s Act to regulate generative AI technology is too strict.

Initially, Altman floated a trial balloon to gauge public opinion over the planned European AI Act, which will be the first serious attempt by any national (or, in this case, supranational) power to fold restrictions and regulations on generative AI into law.

The regulated against regulation.

He voiced concerns over the extent of the proposed Act, saying on Wednesday this week that “The current draft of the EU AI Act would be over-regulating, but we have heard it’s going to get pulled back. They are still talking about it.”

Unimpressed by this language, European lawmakers were quick to correct Mr. Altman’s assertions, saying the draft Bill was not up for alteration, and one Romanian Member of the European Parliament, Dragos Tudorache, confirmed that he did “not see any dilution happening any time soon.”

His initial comments have been characterized by some observers as saber-rattling to water down the Bill, on the basis that generative AI has already been widely adopted by companies in almost every industry imaginable, and so could be seen as too important a development to allow to fail.

If that were the case, Altman would have significantly miscalculated on two fronts – firstly of course, while it has by far the greatest name recognition (despite having by far the clunkiest name), ChatGPT is in no sense the only game in town as far as European countries and businesses are concerned, and has not existed long enough to generate any particularly strong brand loyalty among its user-base.

Whether corporate customers went to Google’s Bard, any of the other giant players, or went with more bespoke, open-source-based solutions, the sudden absence of ChatGPT from the European market would be less a catastrophe, and more the removal of an apex predator from the generative AI food chain, a gap in the market that competitors would be eager to fill.

And secondly, the EU has a fearsome reputation for calling companies – and even countries – to account, rather than bending to their subtle hints that it needs to change the way it does things or the companies will walk away.

Take a look at Meta. Take a look at Brexit.

Just last week, the Irish Data Protection Commission fined Meta over $1.3bn for data privacy infractions. The threat of a walkout by OpenAI would barely raise a Gallic shrug.

Within two days of his initial comments, Altman posted a tweet about having had “a very productive week of conversations in Europe about how to best regulate AI.” He added that OpenAI was “excited to continue to operate here and of course has no plans to leave.”

What this somewhat farcical drama indicates is an interesting duality of approach. Just weeks ago, Altman spoke in front of the US Congress and agreed that generative AIs like ChatGPT and GPT-4 – and both their competitors and their successors – would benefit from some regulation.

He even shared insights into how such regulation might work, given the rapid pace of generative AI development and the legendarily glacial speed of the US legislative process – a real discrepancy, which could, as it stands, see any regulation become meaningless by the time it was ratified, given the advancement of the regulated technologies in the intervening period.

But the idea of the European AI Act being “over-regulation” suggests the notion that those who are being regulated get to say exactly how regulated they are prepared to be. This is in a very real sense not how regulation is supposed to work.

Cynics might argue that in a world drenched in money and power, it is how it actually works – after all, Meta will gladly pay as little of its recent mega-fine as possible, rather than amend its core business model, which would lose it significantly more money.

Development of the Bill.

But in a market where it is by no means the only available player, OpenAI may have overestimated its importance in claiming the AI Bill is over-regulation. Significantly, Google’s chief executive, Sundar Pichai, was in Europe at the same time as Altman, and likely with similar motives – to steer the language of the Bill’s draft.

Ironically perhaps, Google would probably stand a better chance of moving the EU lawmakers, given its much longer standing in the European business community and its broader suite of products and services, giving it a much fuller toolbox of influence-levers than OpenAI has, even with backing from Microsoft.

But as Dutch MEP Kim van Sparrentak, who has worked on the drafting of the Bill, noted drily after Altman’s climbdown, “Voluntary codes of conduct are not the European way. I hope we… will ensure these companies have to follow clear obligations on transparency, security and environmental standards.”

The European Bill has been in development for some time – and the only reason it’s anywhere near ready now (with ratification still to be gone through before it likely becomes law in 2025) is that it was a Bill that started off with a much narrower remit, applying to “high-risk” uses of AI, as in medical devices.

Its scope was only broadened to include generative AI in late 2022, precisely because of the launch of ChatGPT.

The thorny aspects.

The Bill as it stands would make it incumbent on any company making foundation models to identify the risks inherent in those models, and try to minimize those risks before the models were released.

Where Altman might find support for his “over-regulation” stance is in the fact that the Act would also make the model-makers – OpenAI, Google, Alibaba, Meta, et al – partly responsible for how their generative AI systems were used – even in cases where the makers had zero control over the applications to which their products were put.

So, for instance, in the case where open-source coders recently got their hands on the foundation model for Meta’s LLaMA, under the Bill as it stands, Meta could potentially stand partly responsible for any European versions created and distributed by third parties.

The naked data.

It’s likely though that the thing dripping ice water down the spines of Altman and Pichai is the provision in the Bill that would make generative AI companies publish summaries of the data used to train their models.

Google voluntarily did something approaching that level of openness with its Bard generative AI, and the results were… interesting, suggesting that non-factual, inaccurate, and potentially PII data may have formed part of the training data.

OpenAI has yet to reveal the scope and nature of its training data publicly.

It’s unclear exactly what “discussions” Altman had within the 48 hours between toying with the idea of leaving Europe altogether and confidently assuring Twitter that there were no plans to do so.

But, for now at least, the tantrum in a teacup appears to be over. What happens en route to the Bill becoming the EU AI Act in 2025 remains to be seen.

Biometric Information Privacy Act (BIPA) – a data protection fail? https://techhq.com/2023/03/biometric-information-privacy-act-bipa-a-data-protection-fail/ Fri, 24 Mar 2023 15:28:02 +0000 https://techhq.com/?p=222470


Biometrics – fingerprints, retina scans, iris patterns, voiceprints, facial recognition features, and other uniquely identifiable human attributes – give developers the option to improve device security. As anyone who has Touch ID or Face ID services enabled on their smartphone will know, biometrics are convenient for unlocking your device or authorizing a payment. Users don’t need to remember a fingerprint, for example, and modern data capture methods are quick and easy to use. But, as Illinois’ Biometric Information Privacy Act (BIPA) points out, biometrics are different from other unique identifiers such as passwords or social security numbers.

Users can’t easily reset their biological information. And when the Biometric Information Privacy Act was passed in 2008, concern was growing about what would happen if biometric data was compromised and fell into the hands of bad actors. At the time, individuals had no recourse if their biometric data was stolen, and state legislators agreed that some form of protection should be put in place. Pilot studies showed that biometrics are successful in combating fraud, and the use of finger-scanning technologies was growing to secure financial transactions. But the fear was that if users had no recourse for incidents of identity theft, they would withdraw from using biometric systems and progress in fighting financial crime and other fraudulent activity could stall.

And before we dig into the weeds of where things went wrong, it’s worth celebrating the progress that’s been made in the use of biometrics. On TechHQ, we’ve highlighted how voiceprints that incorporate hundreds of other identity signals, such as the cadence at which users enter their details on a keypad, can out-perform conventional knowledge-based authentication (KBA) screening questions used widely by contact centers to secure customer accounts.

Biometrics are useful authentication tools, but they are not secrets, and developers should take that into account when incorporating them into designs. And, as we’ve written previously, if you’re relying on biometrics to possess the characteristics of a key – to be secret, to be random, have the ability to be updated, or reset – then you’re staring at a major security problem.

Class action concerns

The issue with Illinois’ Biometric Information Privacy Act (BIPA) turned out not to be a security problem, as examples of biometric data falling into the hands of bad actors and being misused are hard to find. What happened instead is that the legislation has backfired. Rather than nurture the adoption of biometrics to combat fraud, the act, in effect, deters companies from using biometric technology. And the reason for firms to hesitate before implementing biometrics is the rise of multi-million dollar class actions such as those faced by Google and Facebook.

BIPA is unusual in that a private cause of action can be brought for violations, making it relatively straightforward for owners of biometric data to seek damages. Class actions, which group claimants who believe that their biometric data has been mishandled, can send the potential damages faced by firms sky-high. And companies that use biometric technology could be subjected to fines totalling hundreds of millions of dollars.

Pushing up the number of claims was a key decision made by the Illinois Supreme Court in 2019, which held that victims didn’t have to show that any harm had been caused through mishandling of their biometric data. The test case involved a child who’d had his thumbprint scanned and stored by an amusement park to ride the various attractions using a ticketless pass.

Illinois’ Biometric Information Privacy Act (BIPA) requires consent from subjects, or their legally authorized representatives, for the collection and storage of biometric data. And operators must make it clear to subjects – for example, customers or employees – that a biometric identifier or biometric information is being collected or stored, and the specific reason for its use, as well as the length of time that the data will be stored.
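
As a rough illustration of what those statutory disclosures cover, the sketch below models a consent record with one field per requirement described above. The field names and structure are hypothetical – BIPA dictates what must be disclosed and consented to, not how a system records it.

from dataclasses import dataclass
from datetime import date


@dataclass
class BipaConsentRecord:
    """Hypothetical record of the written notice and release that BIPA
    requires before biometric data is collected or stored."""
    subject_name: str                 # the customer or employee
    authorized_representative: str    # e.g. a parent, if the subject is a minor
    identifier_type: str              # fingerprint, voiceprint, face geometry...
    purpose: str                      # the specific reason for collection and use
    retention_ends: date              # how long the data will be stored
    written_release_signed: bool      # consent actually obtained


# Example: the amusement-park thumbprint scenario from the 2019 test case.
consent = BipaConsentRecord(
    subject_name="Minor park visitor",
    authorized_representative="Parent or legal guardian",
    identifier_type="thumbprint",
    purpose="Ticketless entry to park attractions",
    retention_ends=date(2025, 12, 31),
    written_release_signed=True,
)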

Facebook was judged to have fallen foul of the legislation when a class action was brought against the social media giant, which ‘claimed Facebook collected and stored the biometric data of Facebook users in Illinois without the proper notice and consent in violation of Illinois law as part of its “Tag Suggestions” feature and other features involving facial recognition technology’. Facebook, which denies it violated any law, agreed to pay USD 550 million to settle the privacy lawsuit.

Time to reconsider

And, if Facebook’s experience didn’t make companies think twice about using biometric technology, then the prospect of even larger fines could be the final straw. Businesses are urging the Illinois Supreme Court to reconsider a recent decision that appears to pave the way for claimants to pursue separate cases for each time that biometric data is collected or transmitted. Separating each fingerprint scan, for example, into a separate claim would, as observers have noted, result in “annihilative liability” for businesses.
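
To see why per-scan accrual is described as “annihilative,” consider a back-of-the-envelope calculation. BIPA provides statutory damages of $1,000 per negligent violation and $5,000 per intentional or reckless violation; the workforce and scan counts below are invented purely for illustration.

# Hypothetical exposure if every fingerprint scan counts as a separate
# violation. The $1,000-per-negligent-violation figure comes from the
# statute; the scenario numbers are invented.
NEGLIGENT_DAMAGES = 1_000      # USD per violation

scans_per_day = 4              # clocking in and out, plus breaks
working_days_per_year = 250
years = 5
employees = 1_000

violations_per_employee = scans_per_day * working_days_per_year * years
exposure = violations_per_employee * NEGLIGENT_DAMAGES * employees
print(f"Violations per employee: {violations_per_employee:,}")   # 5,000
print(f"Potential exposure:      ${exposure:,}")                 # $5,000,000,000

Five billion dollars of potential liability from a single timekeeping system is the kind of figure the phrase “annihilative liability” is pointing at.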

“These results are absurd,” Lauren Daming, an attorney at US law firm Greensfelder, told TechHQ. “It’s time for legislators to step up and make some changes.” BIPA was intended to protect consumers from biometrics getting into the hands of bad actors, not to drive companies out of business. What’s worse is that the legislation is a pathfinder for biometric information privacy. Currently, aside from Illinois, there are only two other states that have biometric privacy laws – Texas and Washington – so it’s important for issues to be resolved before BIPA serves as a blueprint more widely.

EU AI Act: ChatGPT stirs up legal debate on generative models https://techhq.com/2023/03/eu-ai-act-chatgpt-stirs-up-legal-debate-on-generative-models/ Tue, 07 Mar 2023 17:31:41 +0000 https://techhq.com/?p=221966


In February 2020, the European Commission opened its public consultation on what’s become known as the EU AI Act. The proposal endeavors to enable the societal benefits, economic growth, and competitive edge that artificial intelligence (AI) brings, while – at the same time – protecting EU citizens from harm. AI systems, as the European Commission points out in its summary document, can create problems. And that has turned out to be somewhat of an understatement.

The EU AI Act is drafted to encourage the responsible deployment of AI systems, which fall into three categories – unacceptable risk, high-risk, and low (or minimal) risk. And the legislative proposal targets a number of scenarios that were prominent at the time of its creation, such as rising numbers of cameras in public places. In this case, the EU AI Act takes a stance against blanket surveillance – prohibiting the use of real-time remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement. Although, readers of the EU AI Act will note that certain limited exceptions apply.

There are other activities that fall into the unacceptable risk bucket. AI-based social scoring – for example, denying citizens access to services based on content they’ve posted online or through social media – is also forbidden. But it’s the next category that has got the European Commission into hot water based on their decision to throw ChatGPT and other generative AI models into the high-risk pot. “It’s a very lazy reaction [by regulators],” Nigel Cannings, CTO of Intelligent Voice, told TechHQ.

Compliance and enforcement

The EU AI Act, which didn’t anticipate the rapid rise of generative AI, needs to be voted on by MEPs to move ahead. And for officials, this now means agreeing on how to handle generative AI models that underpin services such as OpenAI’s ChatGPT and Microsoft’s upgraded Bing search. What’s more, the rapid integration of generative AI features across a wide range of products from image libraries to contact center automation services means that a huge number of firms could be affected.

Based on the wording of the EU AI Act, AI systems that fall into the high-risk category will be subject to a new compliance and enforcement system. Conformity assessments would need to be carried out ahead of launch. Plus, developers will be required to register their standalone high-risk AI systems in an EU database to – in the words of the European Commission – ‘increase public transparency and oversight and strengthen ex post supervision by competent authorities’.

Inevitably, these regulatory requirements will come at a price. Microsoft and other tech giants have pockets deep enough to cover the costs, but what about small and medium-sized enterprises (SMEs) such as Intelligent Voice? An impact assessment report on the ethical and legal requirements of AI published by the European Commission estimates that the cost of compliance could equate to 4-5% of the investment in high-risk applications. And verification charges could bump up those fees by an additional 2-5%.
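
Those percentages translate into concrete sums quickly. The sketch below applies the Commission’s 4-5% compliance and 2-5% verification estimates to a hypothetical €1 million investment in a high-risk application; the investment figure is invented, while the percentage ranges come from the impact assessment cited above.

# Rough cost estimate for a high-risk AI system, using the European
# Commission's impact-assessment ranges quoted above. The EUR 1 million
# investment is a hypothetical figure.
investment = 1_000_000  # EUR spent developing the high-risk application

compliance_low, compliance_high = 0.04 * investment, 0.05 * investment
verification_low, verification_high = 0.02 * investment, 0.05 * investment

print(f"Compliance:   EUR {compliance_low:,.0f} - {compliance_high:,.0f}")
print(f"Verification: EUR {verification_low:,.0f} - {verification_high:,.0f}")
print(f"Total:        EUR {compliance_low + verification_low:,.0f} - "
      f"{compliance_high + verification_high:,.0f}")
# -> roughly EUR 60,000 - 100,000 on a EUR 1 million project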

Dealing with the consequences of large language models is likely to take more than just shoehorning generative AI systems into legislation that was written for a different time. Back then, concerns centered around the rise of military robots, predictive policing, mass surveillance, and the risks of automation on employment prospects and the provision of financial services – to give a few examples. And now we have ChatGPT, which adds its own set of security headaches to the list. “Regulators cannot keep up with the changes in the landscape,” said Cannings. “And people are skirting over some of the problems.”

Data cleaning concerns

Advanced chatbots such as OpenAI’s ChatGPT have been fine-tuned to optimize their conversational capabilities. And part of this process involves data cleaning. Large language models trained on vast datasets scraped from the web can teach advanced chatbots some bad habits, including the ability to regurgitate vile words and phrases. One way of removing offensive material is to introduce humans in the loop to read and label the text so that it can be censored. But this can prove to be traumatic for the teams of workers involved, as highlighted by a recent investigation by Time into the methods used to make ChatGPT less toxic.

Other issues to consider include the vast amounts of energy that are used to train giant AI systems. The energy required to train AlphaGo to beat a grandmaster at the game of Go would’ve been sufficient to power a human’s metabolism for a decade. And large language models are even hungrier for energy, which becomes clear when you look at the infrastructure that’s being used by OpenAI. According to Microsoft, which hosts a supercomputer custom designed for OpenAI to train that company’s AI models, the setup features more than 285,000 CPU cores and 10,000 GPUs.

Estimates vary for the power consumed in training and operating large language models, and the tech giants are quick to point to data centers supplied with green energy. But the scale of the energy use remains sizable. A study dubbed Carbon Emissions and Large Neural Network Training [PDF] published in 2021, which had input from Google, helps to picture the size of the emissions. Crunching the numbers, the authors found that training a large natural language processing model could generate emissions equivalent to more than three round trips made by a passenger jet flying between San Francisco and New York.
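
To put that comparison in numbers: the Patterson et al. study estimates GPT-3’s training run at roughly 552 tonnes of CO2-equivalent and a whole-aircraft round trip between San Francisco and New York at roughly 180 tonnes. Both figures are approximations taken from that paper rather than from this article, so treat the arithmetic as illustrative.

# Approximate figures from "Carbon Emissions and Large Neural Network
# Training" (Patterson et al., 2021); both are rough estimates.
gpt3_training_tco2e = 552         # tonnes CO2-equivalent for one training run
sf_ny_round_trip_tco2e = 180      # whole passenger jet, SF <-> NY round trip

equivalent_trips = gpt3_training_tco2e / sf_ny_round_trip_tco2e
print(f"Equivalent round trips: {equivalent_trips:.1f}")   # ~3.1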

Bringing these issues out into the open is the first step in learning how to manage responsibly a future that now includes generative AI. And legislation such as the EU AI Act will need to be flexible enough to adapt to what could continue to be a surprising sequence of ChatGPT-like breakthroughs.

The Twitter ban problem https://techhq.com/2022/06/twitter-bans-discussion-what-are-the-rules-stop-review/ Mon, 13 Jun 2022 23:01:57 +0000 http://dev.techhq.com/?p=216059


Technology maverick (and world’s richest man) Elon Musk dominated headlines for a while recently with his will-he-won’t-he attempt to purchase social network Twitter. Among various factors the Tesla and SpaceX boss has floated for not finalizing the acquisition are freedom of speech concerns, the prevalence of bots on the platform, and the potential overturning of some of the most controversial Twitter account bans in recent memory.

The touted US$44 billion buyout has been in a holding pattern for the past two months as Musk, the Twitter board of directors, and the mass media have become active agents in the drawn-out saga – owing primarily to the technology sphere’s resident bad boy Musk pulling out one debatable tactic after another and leveraging his status as a headline-grabbing media darling (not to mention his own prodigious social media fanbase) to publicly call out what he considers Twitter missteps. Among his criticisms of Twitter, its policies, and its board are the handling of various data privacy and account misuse complaints, including some of the high profile bans of big voices on Twitter.

Ever since gaining more public notoriety in recent years than any celebrity technology demigod ought to have, Musk has often seemed deliberately to take the more controversial stance when it comes to matters not directly affiliated with one of his entities. Despite this role as a loudhailer for whatever his opinions might be on any day, on any subject, he’s still managed to make innovative businesses happen, like Starlink, which connects swathes of rural America and other inaccessible parts of the world with affordable satellite-based internet coverage.

A couple of years ago, the consumer electric vehicle pioneer (underground enthusiast and amateur brain surgeon) delighted in manipulating his Twitter following, and resultantly the cryptocurrency markets, by posting incessantly in support of Dogecoin, despite that “currency” being known as a crypto ‘meme’ coin, meaning it has no intrinsic value and was created ‘for LOLs’, as its founders have readily admitted. Despite Musk’s claims that Doge could operate as a ‘store of value’ for funds, more likely his support was his way of showing just how influential he could be over large groups of people – especially cryptocurrency investors.

That media firestorm pales in comparison to his recent Twitter buyout dissonance, where among other things, Musk has publicly stated his intentions to allow firebrand public figures like former US president Donald Trump and other right-wing conspiracy theorists like Infowars’ Alex Jones back on Twitter, reversing their lifetime bans in favor of a “timeout,” AKA a temporary account suspension to cool their most inflammatory jets.

As grandstanding as many of these celebrity account bans can be on social media (including platforms Facebook and YouTube, which have their own roll-call of apparently wronged individuals), many ordinary people get temporary account suspensions or outright bans on Twitter on the regular. And many suspensions are not for an incitement to storm the nearest national legislature, or a claim that there are ‘weather weapons.’

As a matter of fact, the Twitter Rulebook is constantly evolving, and there is no guarantee that what is considered Twitter canon today will still be allowed in 2023 – or even next month, as the service’s community guidelines are nebulous at best, prone to reconfiguring for a host of plausible reasons, and, of course, quite opaque to all but a few. And the best part: Twitter is under no obligation to inform affected users of rule changes, meaning you could be happily practicing what you know to be best practice from yesterday, but unannounced amendments to the fabled Rulebook can result in a suspension anyway.


Shading Twitter. Twitter will yield to Elon Musk’s demand for internal data central to a standoff over his troubled US$44 billion bid to buy the platform, US media reported on June 8, 2022. (Photo by Amy Osborne / AFP)

Commonly affected are business users who harness the platform’s reach to promote their activities, products, or services to engaged communities. This can, however, backfire, as Twitter guidelines can change overnight. Case in point: old-hand Twitter marketers will no doubt be familiar with notices about ‘power sharing’ on the network, i.e. the multiple posting and sharing of tweets to extend a post’s visibility. Sometimes this is also accomplished by using multiple accounts, so that interested parties see the same ‘viral’ content across the accounts they follow. But as of at least 2021, the following actions have been prohibited, and the following violations notice is to be found in the Rulebook:

“The following behaviors are violations of the Twitter Rules: Creating serial and/or multiple accounts with overlapping use cases. Cross-posting Tweets or links across accounts. Aggressive following, particularly through automated means.

“As such, these accounts will remain suspended.”

So what was once the gold standard of content-sharing on Twitter is now a violation that can result in lengthy bans. Also disallowed is the posting of “duplicative or substantially similar Tweets on one account or over multiple accounts” operated by the same user. Since 2018, using the same hyperlink in varied tweets with different caption content can also land your tweets (and account) in muddy, automated regulatory waters. Inexplicably, accounts can also be suspended for the “aggressive” liking and retweeting of too many tweets at around the same time. This is presumably to prevent automated bots spamming posts with interactions – but the definition of “aggressive” interactions is not publicly known. To all intents and purposes, it is left to the undisclosed preferences of Twitter’s policy makers.
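Twitter does not publish how it identifies “duplicative or substantially similar” tweets, but a minimal sketch of the kind of heuristic such automated checks might rely on – here a simple token-overlap comparison, an assumption for illustration rather than Twitter’s confirmed method – could look like this:

# Illustrative sketch only: Twitter's actual duplicate-detection logic is not public.
# A normalized-token overlap comparison stands in for whatever heuristic the platform uses.
import re

def tokens(tweet: str) -> set:
    """Lowercase, strip URLs and @mentions, and return the remaining word tokens."""
    tweet = re.sub(r"https?://\S+|@\w+", "", tweet.lower())
    return set(re.findall(r"[a-z0-9']+", tweet))

def substantially_similar(a: str, b: str, threshold: float = 0.8) -> bool:
    """Flag two tweets as 'substantially similar' if their token sets overlap heavily."""
    ta, tb = tokens(a), tokens(b)
    if not ta or not tb:
        return False
    return len(ta & tb) / len(ta | tb) >= threshold

# Example: near-identical captions around the same link, posted from two accounts.
print(substantially_similar(
    "Our new report on ad spend is out now https://example.com/report",
    "Our brand new report on ad spend is out https://example.com/report",
))  # True under this heuristic

Under a heuristic like this, even legitimate cross-promotion from multiple brand accounts would trip the threshold – which is precisely the risk the rules now pose for marketers.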

Alongside these frankly headscratch-inducing violations, there are also legitimate reasons for Twitter account suspensions or bans that most reasonable parties can agree on – even though they might infuriate and disrupt the regular social behavior of account holders who mistakenly get caught up in one of the platform’s regulatory policing sweeps. Among them are authenticity violations: impersonating someone else, using someone else’s media (photos, bios), providing fake information, or violating the copyrights or trademarks of others. Any of these transgressions may result in a temporary ban, for which the user either has to appeal to Twitter via the offending account or wait an indeterminate period for the ban to be lifted (if it is not a repeat offense).

Quite rightly, there are bans for endangering the safety of oneself or others: using the platform to transmit threats, promoting self-harm, violence, or child exploitation, or posting and sharing certain types of adult content. And most recently, Twitter can mete out lifetime bans for endangering the privacy of other users. Posts under this category include sharing or threatening to share the personal information of others without their consent; incentivizing other users to share third-party personal data by offering rewards; and leaking private media like photos and videos of others without their consent.

Not all suspensions are created equal – temporary bans vary, and the offending account might keep some limited ability to DM followers, or to like, retweet, and tweet in a limited capacity. These “read-only bans” can last anywhere from 12 hours to seven days, but they are not the end of one’s Twitter experience for good. Permanent suspensions, like those handed to Trump and Jones, are generally irreversible – barring the acquisition of the platform by an individual who believes in propagating extremists’ thoughts.

The problem for commercial marketers is one of dependence on a secondary platform for results. That has its risks, naturally, but those risks can be compounded by any attempt to “game” the system. In previous decades, stuffing websites with high-ranking keywords irrelevant to a site’s content was at first frowned upon and eventually punished by being ignored on SERPs. Twitter and YouTube users whose livelihood rests on their content being published on those platforms have to adhere to the rules but at all times be aware that without warning, their main publication channel(s) may shut them down.

The post The Twitter ban problem appeared first on TechHQ.

]]>
UK is probing Google’s online ad tech stack dominance, again https://techhq.com/2022/05/uk-is-probing-googles-online-ad-tech-stack-dominance-again/ Fri, 27 May 2022 13:10:13 +0000 http://dev.techhq.com/?p=215944

Britain on Thursday launched a second regulatory investigation into US tech giant Google’s dominance in online advertising technology, also known as ad tech. The Competition and Markets Authority (CMA) watchdog said in a statement that it will examine the group’s services that facilitate the sale of online advertising space between publishers and advertisers. The CMA will... Read more »

The post UK is probing Google’s online ad tech stack dominance, again appeared first on TechHQ.

]]>

Britain on Thursday launched a second regulatory investigation into US tech giant Google’s dominance in online advertising technology, also known as ad tech.

The Competition and Markets Authority (CMA) watchdog said in a statement that it will examine the group’s services that facilitate the sale of online advertising space between publishers and advertisers.

The CMA will look at three key parts of Google’s complex set of services, known as advertising technology intermediation or the ad tech stack, in each of which the US giant is a dominant player.

The tech titan has “strong positions” at various levels of the ad tech stack and charges fees to both publishers and advertisers, according to the CMA.

“We’re worried that Google may be using its position in ad tech to favor its own services to the detriment of its rivals, of its customers and ultimately of consumers,” added CMA chief executive Andrea Coscelli in the statement. “This would be bad for the millions of people who enjoy access to a wealth of free information online every day.”

The news comes just two months after both the European Union and Britain opened antitrust probes into a deal between Google and Facebook owner Meta that was allegedly aimed at cementing their dominance over online advertising.

The so-called “Jedi Blue” agreement of 2018 has also faced lawsuits in the United States as global regulators step up their campaign to limit the power of the ‘Big Tech’ juggernauts. Back in March, the European Commission said it was investigating the agreement, with its probe exploring whether the arrangement between the internet behemoths had been used to “restrict and distort competition in the already concentrated ad tech market”.

The bloc’s competition supremo, Margrethe Vestager, said that, if confirmed, the arrangement would have served to distort competition, squeezing rival ad tech companies, publishers “and ultimately consumers.”

At the same time as the EU probe in March, the UK’s Competition and Markets Authority launched its own investigation into the agreement, and the two watchdogs will “closely cooperate” on it, the EU said. Chief executive Andrea Coscelli said the CMA “will not shy away from scrutinizing the behavior of big tech firms […] working closely with global regulators to get the best outcomes possible.”

US court documents revealed that the top bosses of Google and Facebook were directly involved in approving the allegedly illegal 2018 deal. The legal documents filed in a New York court clearly refer to Sundar Pichai, chief of Google’s parent firm Alphabet, as well as Facebook executive Sheryl Sandberg and CEO Mark Zuckerberg — even if their names were redacted.

Google has further enraged publishers and online ad rivals with its plan to overhaul ad tracking on its world-leading Chrome browser and Android smartphone operating system. The internet giant made the move — which does away with the third-party tracking files known as “cookies” — in answer to increasing pressure to better guarantee privacy for web users.

Critics see it as a way for Google to deny publishers and advertisers precious data and embolden the company’s dominance in advertising. The search giant’s parent Alphabet Inc pulled in over US$60 billion in ad revenue in the fourth quarter of 2021 alone, accounting for over 80% of its revenue. Meta booked US$33.6 billion in sales in the same period, mostly from advertising.

The post UK is probing Google’s online ad tech stack dominance, again appeared first on TechHQ.

]]>
Why Firms Need to Push-On to Meet the Regulators’ Requirements https://techhq.com/2022/05/flexible-enterprise-compliance-regulations-solution-operational-resilience/ Thu, 19 May 2022 08:40:59 +0000 http://dev.techhq.com/?p=215830

Corporater’s Operational Resilience solution is regulatory compliance software that helps companies prepare for the new policy on operational resilience while strengthening them against business disruptions.

The post Why Firms Need to Push-On to Meet the Regulators’ Requirements appeared first on TechHQ.

]]>

In March 2021, the PRA and FCA published policy statements on Operational Resilience, which took effect on 31 March 2022, with a further three years for embedding, up to 31 March 2025. The policies define Operational Resilience as “the ability of firms and the financial sector as a whole to prevent, adapt, respond to, recover and learn from operational disruptions and an outcome where there is an expectation on firms to be forward looking and making decisions today that help prevent harm tomorrow”.

Financial institutions have spent considerable sums identifying Important Business Services and mapping them to people, assets, and business processes. But there is a view in some quarters that the necessary changes to systems have been made for the sake of adherence rather than out of a commitment to continual assessment and improvement.

David Bailey, Executive Director of the Bank of England, warned in typically understated language that “there is still distance to travel to a point where firms across the sector reach the level of operational resilience we expect to see.”

Early assessments of firms by the Bank of England have produced mixed results. It found that firms had made ‘positive progress’ in identifying Important Business Services, but it was less impressed with the setting of Impact Tolerances: some firms had identified an impact tolerance for customer harm or market integrity but did not include one for safety and soundness, and an even higher number did not include an Impact Tolerance for financial stability. The Bank has warned firms that they will need to ‘justify their judgements.’ For mapping and testing activities, firms have relied on existing frameworks and tools, and the Bank of England has warned that significant further work is required over the next three years for firms to embed fully coherent mapping and testing frameworks.

Source: Corporater

Current toolsets are not natively capable of continuous testing, of showing where investment is required, or of showing where change needs to happen. Some firms are taking an approach that repurposes existing tools such as business continuity and business intelligence software, or even Excel. Such an approach is likely to be suboptimal: it increases the risk of non-compliance and will inevitably cost significantly more in the long term. In short, PS6/21 and PS21/3 are not business continuity or monitoring stipulations in the “traditional” sense. The regulators’ requirements describe a continuous cycle of appraisal and re-appraisal of all business services and the resources they rely on, including estates, people-based processes, and data-based systems.

During the three-year transition phase, the regulators have signaled that monitoring and enforcement activities will progressively tighten. According to BCS Accenture and ghost-factory.de, this will drive change in three areas. Firstly, the Important Business Services identified will not be static; they will change and evolve as the firm, the market, and customers evolve. Secondly, operational resilience dashboards will need regular enhancement. Thirdly, incident reporting mechanisms, and their alignment to Impact Tolerances, will need to take into account the PRA’s ‘Operational Resilience Incident Reporting’ consultation paper. More generally, regulators have underlined the need to embed a culture of continuous improvement, together with the frameworks and tools that can enable it.

When the policy statements were announced, Corporater decided that rather than expanding or repurposing existing solutions, it would build a market-ready Operational Resilience solution that was flexible enough to meet the needs of firms, now and in the future. It’s aimed at comprehensively supporting the advancement of firms’ operational resilience capability to 2025.

Operational Screenshots. Source: Corporater

“We very much designed the solution with the policy statements in mind and at the heart of what we’ve done and therefore have approached it from a top-down perspective of focusing on meeting the requirements of what the policy statements are trying to address,” said Mark Limpkin, Corporater’s UK Head of Consulting.

Corporater’s solution is built to reflect a hierarchy, with Important Business Services at the top, followed by the underlying processes, and then the underlying resources. Important Business Services are rated against two concepts: Vulnerability (how likely it is that a business service will experience disruption, based on underlying resource threats) and Recoverability (whether or not a business service can continue to operate inside the defined Impact Tolerances).
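A minimal sketch of how that hierarchy and its two ratings might be modelled in code – the class names, fields, and thresholds below are illustrative assumptions, not Corporater’s actual data model – could look like this:

# Illustrative sketch only: names and fields are assumptions, not Corporater's data model.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Resource:
    name: str
    threat_level: float                      # 0.0 (no known threats) to 1.0 (severe)

@dataclass
class BusinessProcess:
    name: str
    resources: List[Resource] = field(default_factory=list)

@dataclass
class ImportantBusinessService:
    name: str
    impact_tolerance_hours: float            # maximum tolerable disruption
    expected_recovery_hours: float           # recovery time observed in testing
    processes: List[BusinessProcess] = field(default_factory=list)

    def vulnerability(self) -> float:
        """Likelihood of disruption, approximated here by the worst underlying resource threat."""
        threats = [r.threat_level for p in self.processes for r in p.resources]
        return max(threats, default=0.0)

    def recoverable(self) -> bool:
        """True if the service can resume operating inside its defined Impact Tolerance."""
        return self.expected_recovery_hours <= self.impact_tolerance_hours

payments = ImportantBusinessService(
    name="Retail payments",
    impact_tolerance_hours=4.0,
    expected_recovery_hours=6.0,
    processes=[BusinessProcess("Clearing", [Resource("Core banking platform", 0.7)])],
)
print(payments.vulnerability(), payments.recoverable())   # 0.7 False

In a model like this, a service whose expected recovery time exceeds its Impact Tolerance is flagged as non-recoverable, pointing to where investment and remediation are required.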

Corporater’s Operational Resilience solution enables firms to take action across six steps (identify, map, assess, test, invest, and communicate). With this framework, companies can proactively comply with existing and new policies as they emerge — not because they have to, but because Operational Resilience is clearly imperative in the sector.

The post Why Firms Need to Push-On to Meet the Regulators’ Requirements appeared first on TechHQ.

]]>