Regulations - TechHQ

Spotify, Epic decry Apple terms under EU compliance https://techhq.com/2024/03/open-letter-to-apple-from-spotify-and-epic-on-terms-and-conditions/ Tue, 05 Mar 2024 09:30:12 +0000

  • Spotify among companies complaining about Apple EU developer terms & conditions.
  • Anti-competitive practices make sideloading more expensive.
  • Software companies likely to keep working under existing Apple terms & conditions.

With iOS 17.4 due to be released in the coming week, 30 companies have penned an open letter to the European Commission, media groups, and lobby organizations, stating their concerns about Apple’s terms and conditions, which they claim will still leave the company in contravention of the EU’s Digital Markets Act.

To comply with the DMA, Apple is now allowing third-party app stores and the sideloading of applications downloaded independently. Developers will be given a choice between signing up to Apple’s new terms or sticking with the existing T&Cs, which the group claims is a “false choice.” The new terms, the signatories claim, will “hamper fair competition with potential alternative payment providers.”


“Rotten apples” by fotologic is licensed under CC BY 2.0.

To aid developers in their choice, Apple provides a handy calculator to guide them through the myriad available options. Developers in the EU select whether they qualify for the App Store Small Business Program, which App Store fees they would pay, and the value of in-app purchases they predict users will make – under both the new and the old terms.

What will surprise absolutely no one is that developers will end up paying more money to Apple if they choose to allow their apps to be sideloaded than they currently pay under existing terms. They will also have the cost of running an app store, a customer support function, and a payment processor. For developers, keeping business as usual under Apple’s existing terms results in greater revenue. The only way to preserve income under Apple’s new terms with apps served from a third-party store is to raise the price that consumers pay.
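A rough sketch makes the comparison concrete. The figures below are our assumptions based on Apple's EU terms as reported at the time (a 30%/15% commission under the existing terms; a 17%/10% commission plus a €0.50 Core Technology Fee per first annual install beyond one million under the new ones), and the sketch deliberately omits payment processing and the costs of running a store:

```python
def old_terms_fee(iap_revenue: float, small_business: bool = False) -> float:
    """Apple's cut under the existing terms: a flat commission on in-app revenue."""
    return iap_revenue * (0.15 if small_business else 0.30)

def new_terms_fee(iap_revenue: float, annual_installs: int,
                  small_business: bool = False,
                  ctf_per_install: float = 0.50,
                  free_installs: int = 1_000_000) -> float:
    """Apple's cut under the new EU terms: a lower commission, plus a
    Core Technology Fee on each first annual install beyond the threshold."""
    commission = iap_revenue * (0.10 if small_business else 0.17)
    core_technology_fee = max(0, annual_installs - free_installs) * ctf_per_install
    return commission + core_technology_fee

# A popular free-to-play app: many installs, modest per-user spend.
installs = 10_000_000
revenue = installs * 0.50  # assume an average of €0.50 in-app spend per install
# Old terms: commission only -- roughly €1.5M.
# New terms: roughly €0.85M commission plus €4.5M in Core Technology Fees.
```

For an app with millions of free installs and low per-user spend, the Core Technology Fee swamps the commission saving, which is exactly why sideloading pencils out worse for developers.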

This puts some of the more hyperbolic language of the open letter to the European Commission into context. It claims that “Apple is rendering the DMA’s goals of offering more choice and more control to consumers useless.” Consumers will rarely have the choice to sideload an app or download it from a third-party store, because no application developer will opt to make less money.

The letter states:

“New app stores are critical to driving competition and choice both for app developers and consumers. Sideloading will give app developers a real choice between the Apple App Store or their own distribution channel and technology. Apple’s new terms do not allow for sideloading and make the installation and use of new app stores difficult, risky and financially unattractive for developers. Rather than creating healthy competition and new choices, Apple’s new terms will erect new barriers and reinforce Apple’s stronghold over the iPhone ecosystem.”

Apple’s new terms do “allow for sideloading” – in this, the letter is incorrect – but its terms are deliberately anti-competitive. The company is indeed “[making] a mockery of the DMA and the considerable efforts by the European Commission and EU institutions to make digital markets competitive.”


Something rotten in the state of Apple? Suuuurely not? “rotten apple” by johnwayne2006 is licensed under CC BY-NC-SA 2.0.

It would be naive to believe that the signatories of the letter are beating a drum for consumers’ right to choose where they source their apps from. The motives of Epic Games, Spotify, Uptodown, et al. are as mercenary and cynical as Apple’s. They expected to make more money thanks to the DMA’s imposition but have been thwarted, at least for now. The ‘Apple Tax’ paid by companies with apps on the App Store is a thorn in the side of shareholders in companies dependent on Apple’s App Store.

For the next few years, European taxpayers will fund the inevitable legal battle the EU will wage on behalf of the likes of Spotify (2023 Q4 revenue of €3.7 billion, with €68 million in adjusted operating profit) and Epic Games (valued at $31.5 billion in 2023), so that justice can be granted to these stalwart defenders of consumer choice.

Under the Digital Markets Act, violators may be fined up to 10% of worldwide turnover, which in Apple’s case would amount to approximately $38 billion plus change. For Apple, it likely won’t come to that, but as ever, Cupertino can afford its lawyers’ salaries for a few years until it finds ways to recoup the costs of operating in a competitive market – at least in the EU. Developers and consumers in the US, UK, and elsewhere can look forward to business as usual.


The post Spotify, Epic decry Apple terms under EU compliance appeared first on TechHQ.

Gaming in China is so back, baby https://techhq.com/2024/01/china-gaming-regulation-might-not-happen-after-major-u-turn/ Thu, 25 Jan 2024 12:00:44 +0000


• Gaming in China to escape industry-killing restrictions?
• Rules on gaming in China have been removed from the website where their terms were listed.
• But were they actually draconian – or a prescription for mental health among the gaming community?

In recent years, China gaming crackdowns have been fodder for us-and-them discourse, with the idea of a regulating body deciding how long young people could game seen as almost as heinous (more so, to some) as the one-child policy that ended in 2016.

Back in 2021, China’s video game regulator said online gamers under the age of 18 would only be allowed to play for one hour on Fridays, weekends, and holidays. Let’s face it, sometimes threatening a kid with legal action feels like the only way they’ll listen to you – especially when you tell them to turn off their devices.

In China, gaming was branded “spiritual opium” by media outlets, as rising concerns about the impact of excessive gaming on young people meant the government stepped in. As recently as December of last year, draft legislation was publicized that would limit the amount people could spend on video games.

Sure, gaming crackdowns hurt the young people they restrict, but have we considered the monetary loss to gamemakers? Please, God, won’t somebody think of the gamemakers?!

If online games stopped offering rewards for excessive play time and in-game spending, including those for daily logins, as the industry regulator stipulated, people would play less and – no, surely not! – spend less on gaming.

“The removal of these incentives is likely to reduce daily active users and in-app revenue and could eventually force publishers to fundamentally overhaul their game design and monetization strategies,” said Ivan Su, an analyst at Morningstar.

China is the world’s biggest gaming market, so any changes there are high-stakes.

But after a U-turn from China, gaming companies can breathe again. The National Press and Publication Administration (NPPA) has removed the December drafts from its site. The apparent change in opinion has seen share prices of Chinese gaming firms jump.

Shares in Tencent Holdings, the world’s biggest gaming company, and its closest rival NetEase, rose as much as 6% and 7% in morning trading respectively. In December, nearly $80bn was wiped from their values, so there’s definitely ground to recover.

All the same, there’s still uncertainty about the future of China’s gaming industry. Su said he thinks ambivalence around gaming will “probably last for quite some time, unless we get a very drastic turnaround in government rhetoric, or unless we get some super supportive policies.”

Is the gaming ecosystem in China about to flourish?

The link to the proposed legislation now opens an error page – possibly the first time in history users have been glad to see one.

The removal of the potential rules from the NPPA website was described by analysts as unusual. However, the consultation period on the rules ended on Monday, the day before their disappearance, suggesting a revision is in store.

After the market turmoil the initial proposal caused, the NPPA took a more conciliatory tone, and Feng Shixin was removed from his position as head of the publishing unit of the Communist Party’s Publicity Department, which oversees the NPPA.

The two most contentious articles of the China gaming crackdown were articles 17 and 18. Article 17 seeks to ban videogames from forcing players into combat, which is the key mechanic of the majority of contemporary multiplayer games.

Article 18 would require games to set a spending limit for players and bar features incentivizing in-game spending. Su “expects the government to remove Article 17 (prohibition of mandatory player-versus-player) and 18 (imposing spending limit) from the final rule,” he told Reuters.

It might not be a popular opinion, but maybe limiting the gaming features that the Chinese restrictions targeted could be a good thing…

In the same way that Meta has faced lawsuits for knowingly enabling addictive features in its social media, perhaps it’s time to reconsider what “spiritual opium” we allow kids access to.

Okay, we aren’t sure the opium thing will stick, but the idea of reducing the money spent on virtual rewards and combat-focused gameplay doesn’t seem that outrageous if you separate it from the fact that it’s the Chinese government that was insisting on it, and look at lawsuits that are still live in the US.

Watch this gaming space.


The post Gaming in China is so back, baby appeared first on TechHQ.

Exploring the groundbreaking EU AI Act https://techhq.com/2023/12/what-will-the-eu-ai-act-say/ Tue, 12 Dec 2023 09:30:40 +0000

  • Transparency requirements have been introduced by the EU in the AI Act for developers of general-purpose AI systems like ChatGPT.
  • It also bans unethical practices, such as indiscriminately scraping images from the internet for facial recognition databases. 
  • Fines can range up to €35 million or 7% of global turnover, with the severity determined by the nature of the infringement and the company’s size.

In a significant stride toward regulating the rapidly evolving field of AI, the European Union has recently achieved a milestone with the approval of the EU AI Act. This landmark legislation marks a defining moment for the region, setting the stage for comprehensive guidelines governing the development, deployment, and use of AI technologies.

It all started in April 2021, when the European Commission proposed an AI Act to establish harmonized technology rules across the EU. At that time, the draft law might have seemed fitting for the existing state of AI technology, but it took over two years for the European Parliament to approve the regulation. In that time, the landscape of AI development has been far from idle. What the bloc did not see coming was the release and proliferation of OpenAI’s ChatGPT, showcasing the capability of generative AI – a subset of AI that was foreign to most of us.

As more and more generative AI models entered the market following the dizzying success of ChatGPT, the initial draft of the AI Act began to show its age. Caught off guard by the explosive growth of these AI systems, European lawmakers faced the urgent task of determining how to regulate them under the proposed legislation. Even so, the European Union remains ahead of the curve when it comes to regulating the tech world.

So, following a long and arduous period of amendment and negotiation, the European Parliament and the bloc’s 27 member countries finally overcame significant differences on controversial points, including generative AI and police use of face recognition surveillance, to sign a tentative political agreement for the AI Act last week.

“Deal!” tweeted European Commissioner Thierry Breton just before midnight. “The EU becomes the first continent to set clear rules for using AI.” This outcome followed extensive closed-door negotiations between the European Commission, European Council, and European Parliament throughout the week, ending after some thirty-six hours of rigorous negotiation spread across three days.


Yes, but how good is it? Source: Thierry Breton on X

“Parliament and Council negotiators reached a provisional agreement on the AI Act on Friday. This regulation aims to ensure that fundamental rights, democracy, the rule of law, and environmental sustainability are protected from high-risk AI while boosting innovation and making Europe a leader. The rules establish obligations for AI based on its potential risks and level of impact,” the European Parliament said.

The AI Act was initially crafted to address the risks associated with specific AI functions, categorized by their risk level from low to unacceptable. However, legislators advocated for its extension to include foundation models—the advanced systems that form the backbone of general-purpose AI services, such as ChatGPT and Google’s Bard chatbot.
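The risk-based structure described above can be sketched as a simple lookup. The four tier names follow the Act's widely reported scheme, but the example use cases and their assignments are our illustrative assumptions, not quotations from the final text:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "risk assessments, training-data disclosure, human oversight"
    LIMITED = "transparency obligations, e.g. labelling AI-generated content"
    MINIMAL = "no additional obligations"

# Illustrative mapping only -- the Act classifies uses of AI, and these
# example assignments are assumptions for the sketch.
EXAMPLE_USES = {
    "scraping images for a facial recognition database": RiskTier.UNACCEPTABLE,
    "CV-screening tool used in hiring": RiskTier.HIGH,
    "chatbot that can produce deepfakes": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def obligations(use_case: str) -> str:
    """Return the tier and its obligations for a given (example) use case."""
    tier = EXAMPLE_USES[use_case]
    return f"{tier.name}: {tier.value}"
```

The point of the design is that obligations attach to the application, not the underlying model, which is why extending the Act to foundation models required fresh negotiation.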

Dragoș Tudorache, a member of the European Parliament who has spent four years drafting AI legislation, said the AI Act sets rules for large, powerful AI models, ensuring they do not present systemic risks to the Union and offers strong safeguards for citizens and democracies against any abuses of technology by public authorities. 

“It protects our SMEs, strengthens our capacity to innovate and lead in AI, and protects vulnerable sectors of our economy. The EU has made impressive contributions to the world; the AI Act is another one that will significantly impact our digital future,” he added.

What makes up the EU AI Act?

While the EU AI Act is notable as the first of its kind, its comprehensive details remain undisclosed. A public version of the AI Act is not expected for several weeks, making it challenging to provide a definitive assessment of its scope and implications unless the text leaks early.

Members of the European Parliament take part in a voting session during a plenary session at the European Parliament in Strasbourg, eastern France, on November 22, 2023. (Photo by FREDERICK FLORIN / AFP).


Policymakers in the EU embraced a “risk-based approach” to the AI Act, focusing intense oversight on specific applications. For instance, companies developing AI tools with high potential for harm, especially in areas like hiring and education, must furnish regulators with risk assessments, data used for training, and assurances against damage, including avoiding perpetuating racial biases. 

The creation and deployment of such systems would require human oversight. Additionally, specific practices, like indiscriminate image scraping for facial recognition databases, would be banned outright. And, as stated by EU officials and earlier versions of the law, chatbots and software producing manipulated images, including “deepfakes,” must explicitly disclose their AI origin.

Law enforcement and governments’ use of facial recognition software would be limited, with specific safety and national security exemptions. The AI Act also prohibits biometric scanning that categorizes people by sensitive characteristics, such as political or religious beliefs, sexual orientation, or race. “Officials said this was one of the most difficult and sensitive issues in the talks,” a report by Bloomberg reads.

While the Parliament advocated for a complete ban last spring, EU countries lobbied for national security and law enforcement exceptions. Ultimately, the parties reached a compromise, agreeing to restrict the use of the technology in public spaces but implementing additional safeguards.

The suggested legislation entails financial penalties for companies breaching the rules, with fines ranging up to €35 million or 7% of global turnover. The severity of the penalty would be contingent on the nature of the violation and the size of the company. While civil servants will finalize some specifics in the coming weeks, negotiators have broadly agreed on introducing regulations for generative AI.
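As a sketch of the arithmetic, assuming the cap is the greater of the two headline figures (the reading widely reported for the most serious infringements):

```python
def max_ai_act_fine(global_turnover_eur: float) -> float:
    """Ceiling on an AI Act fine for the most serious infringements.

    Assumes the widely reported reading: EUR 35 million or 7% of worldwide
    turnover, whichever is higher. The final text tiers fines by the nature
    of the violation and the size of the company.
    """
    return max(35_000_000.0, 0.07 * global_turnover_eur)

# A firm with EUR 2bn turnover: 7% (EUR 140M) exceeds the EUR 35M floor.
# A firm with EUR 100M turnover: the flat EUR 35M figure dominates.
```

The fixed floor means a small company cannot shrink its exposure below €35 million, while for a big tech firm the percentage term dwarfs it.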

Some 85% of the technical wording in the bill has already been agreed on, according to Carme Artigas, AI and Digitalization Minister for Spain (which currently holds the rotating EU presidency).

So far, EU lawmakers have been determined to reach an agreement on the AI Act this year, in part to drive home the message that the EU leads on AI regulation, especially after the US unveiled an executive order on AI and the UK hosted the international AI Safety Summit; China, too, has developed its own AI principles.

However, next year’s European elections in June are also quickly closing the window of opportunity to finalize the Act under this Parliament. Despite these challenges, the EU’s success in finalizing the first comprehensive regulatory framework on AI is impressive.


The post Exploring the groundbreaking EU AI Act appeared first on TechHQ.

Asian countries hold big tech companies accountable https://techhq.com/2023/11/why-are-big-tech-companies-suddenly-being-held-accountable/ Mon, 27 Nov 2023 09:30:40 +0000


• Big tech companies are suddenly being held accountable for their influence and impact around the world.
• Meta is even being sued by over 30 Attorneys-General in the US.
• Tighter regulations will have to be complied with in Asia.

It’s not been a great week for big tech companies. In the US, Apple is in hot water due to concerns that the Chinese government is influencing the company’s program commissioning, and Elon Musk’s X has failed to pay a fine imposed by Australian regulators. Sam Altman has been fired, hired, and rehired again, shaking investor confidence in OpenAI.

Across the world, too, regulators and governments are cracking down on big tech companies and emphasizing the need for compliance.

New legal powers in the European Union have changed the landscape for Apple-Android messaging, and in the US, new digital payments regulations have been proposed. Now, Asian countries are paying attention – and are increasingly concerned by the dominance of just a few tech companies.

Across the region, there are signs that big tech companies will have to comply with standards that might soon become international benchmarks. Let’s have a look at some specifics.

Japan

In June, a Japanese government panel led by Chief Cabinet Secretary Hirokazu Matsuno proposed regulations that would open up the Apple and Google app stores to competition, as well as stopping a bias toward the companies’ respective smartphone operating systems.

Chief Cabinet Secretary Hirokazu Matsuno lets the little guys compete with big tech companies.


The panel said that the bias is caused by the companies making it difficult to download apps or use payment services developed by third parties – both Apple and Google charge commissions on app payments.

In October, Japan’s Fair Trade Commission (FTC) revealed it had begun an investigation into Google blocking rival services and a potential breach of antitrust regulations.

The Justice Ministry has made another push for foreign technology companies to register their overseas headquarters in the country to extend government oversight in online harassment cases. That would enable victims to file lawsuits in Japan instead of having to go overseas.

Most social media companies have registered already, after a warning in 2022 that 48 foreign tech businesses were in breach of this rule. Google, Meta and X all registered their headquarters in the country.

FTC officials are on a recruitment drive to hire more lawyers to enhance scrutiny of big tech companies as Japan shows increasing concern that the domestic units of tech companies are undermining fair competition.

Nepal

On November 13th, Nepalese government officials announced that the country would ban TikTok. The move comes in response to TikTok’s negative impact on the country’s “social harmony” and the way it “disrupts family structures and social relations.”

The ban has been criticized by some as blocking freedom of expression, and a group of journalists and nonprofits released a statement saying it would limit speech and the opportunity for Nepalis to participate in the global online community. Dozens have organized protests around the country.

There are around 2.2 million TikTok users in Nepal, said Sudhir Parajuli, president of the Internet Service Providers’ Association of Nepal. Many use the platform as a source of income.

Can bans on big tech companies badly influence smaller players down the chain? The experience of Nepal suggests they can.

Ashish Dangi holds a placard as he takes part in a protest against the ban on TikTok in Kathmandu, Nepal November 18, 2023. Via REUTERS/Navesh Chitrakar

Over 1,600 cybercrime cases, most of them related to TikTok, have been registered over the last four years in Nepal, according to local media reports.

The government also passed a directive this month requiring social media platforms to establish offices in the country as a means of broadening government oversight. The legislation further outlines a range of prohibited online actions – from spreading fake news to publishing photos of private affairs without permission. Embarrassing outbursts on public transport might be safe from internet trolls, but so, too, might government officials in compromising positions.

Indonesia

Indonesia made headlines in late September as the first country to ban social media sales, bidding goodbye to TikTok Shop. The development was somewhat surprising, as TikTok’s e-commerce push was cited by experts as an indication of the sector’s potential in the country.

Shou Zi Chew, TikTok's CEO, who suffered from the separation of TikTok Shop.


TikTok chief executive Shou Zi Chew said in June that the company planned to invest billions of dollars in Southeast Asia, referencing Indonesia as a crucial part of the strategy.

Ah, how well that worked out.

Indonesian officials say that small businesses need protection from big tech companies, and banning e-commerce on social media will support offline businesses.

“Now, e-commerce cannot become social media. It is separated,” Trade Minister Zulkifli Hasan said.

Previously, President Joko Widodo had warned the country needed to be careful about e-commerce.

TikTok released a statement saying it was complying with the rules and would “no longer facilitate e-commerce transactions in TikTok Shop Indonesia … and will continue to cooperate with the relevant authorities on the path forward.”

India holding big tech companies accountable

In August, the Indian government rolled out a digital personal data protection bill that “seeks to better regulate big tech companies and penalize firms for data breaches,” placing more onerous requirements on big tech companies, like renegotiating contracts with local partners and overhauling data handling processes.

A number of companies had to request extensions of up to 18 months to comply with the rules. The law has further sparked concern about government overreach and a shrinking of digital freedoms in the country, particularly against a backdrop of media freedom concerns.

Another antitrust probe into Google began in May, focusing on its app payments.

Australia

Australia may not traditionally be thought of as an “Asian” country, but it is by geography a member of the Asia-Pacific region (APAC), so geopolitically it counts in our round-up.

Big tech companies have run into tightening legislation.

Is TikTok looking for a way around the TikTok Shop separation?

In February, the social media site X – still called Twitter back then! – was instructed by Australia’s internet safety watchdog to better police child abuse material shared on the platform. The same was asked of YouTube, Google and TikTok, and financial penalties were threatened.

In October, a report by the eSafety commissioner singled X out for allowing child abuse material to proliferate across the platform. It was issued a fine by eSafety on the day of the report’s publication because of its failure to detail plans to get rid of child abuse content on the site or be proactive in tackling the issue.

“Twitter/X has not paid the infringement notice within the allotted time frame and eSafety is now considering further steps,” a spokesperson for the regulator said.

There was a new logo to be designed, though! The social media site was in the middle of a rebrand – pesky regulators should understand that.

Big tech companies have for years acted as though they were above the law, and as though massive regulatory fines were merely the cost of doing multi-million dollar business. If they are now to be faced with stronger regulatory curbs on their behavior, it’s hard to find many people who will cry for them. The only issue, as in the cases of Nepal’s TikTok ban and Indonesia’s TikTok Shop separation, will arise when big tech companies use vendors further down the food chain as a kind of human shield to prevent regulations applying to them.

Governments like Indonesia’s, though, have shown they give very few clicks about such tactics, and have done what they feel is for the overall betterment of their society, rather than bending to the special pleading of the big tech companies.

The post Asian countries hold big tech companies accountable appeared first on TechHQ.

Biden executive order brings us closer to regulations on AI https://techhq.com/2023/11/does-the-biden-executive-order-bring-us-closer-to-regulations-on-ai/ Wed, 01 Nov 2023 14:00:51 +0000


• Regulations on AI have been demanded since the second after ChatGPT arrived.
• The new Biden executive order gets us closer than we’ve ever been to a framework of AI law.
• It tackles eight major areas of concern with generative AI.

Since generative AI became a mainstream reality in November 2022, thanks to OpenAI and its Microsoft-backed chatbot, ChatGPT, people, organizations, and governments around the world have been calling for regulations on AI.

Potential dangers of the technology have run the gamut from the standard sci-fi “Algorithmic overlords will kill us all and/or destroy the planet,” through the significantly more likely “the technology will put whole armies of people out of work,” to the most likely of all, “It’s going to have exploitation of workforces, misogyny, bigotry and all the other unfairnesses of our society baked right in and normalized.”

There have been entirely legitimate concerns on the nature, quality and bias of the data on which large language models are trained, and equally legitimate worries that, given the startlingly rapid adoption of generative AI across the business community of the world, any regulations on AI would either come too late to be effective, or be too broad to do any good.

Now the Biden executive order has come into being, proposing sweeping additions to regulations on AI.

Will the executive order be effective?

The European Union was first out of the gate in terms of developing regulations on AI, and while its provisions in the EU AI Act are a brave stab at delivering guardrails on AI technology, they were begun in the era before generative AI, and so while they deal comprehensively with pre-generative technology, their regulations on generative AI are something of a blunt instrument.

While there’s no legal framework in the world where those who are regulated get to say how far the regulations should go, OpenAI’s Sam Altman felt free to add that he felt the European approach was “overregulation,” and went on a slightly desperate last-minute European tour before the provisions of the Act were made public, hoping to get them amended.

Without regulations on AI, we’re doooooomed!

Speaking of Altman, he’s previously spoken to the likes of Senate subcommittees about what he believes – or at least is eager to make it appear that he believes – are the dangers of the technology which has made his name and fortune, up to and including complete human extinction.

While it’s worth noting that in the wake of that testimony, he floated a security technology which could allegedly keep user data safe even from increasingly sophisticated generative AI, the like of which he was also keen to develop, Altman’s been back in the headlines just this week.

While he continues to acknowledge the feasibility of some of the wilder disaster claims for generative AI (and the fact that we’re still in the very early days of the technology’s use, despite its wild breadth of uptake and application), Altman says – probably with the most open honesty of any of his recent statements – that there’s no putting the genie of AI back in its bottle. He therefore wants regulations around AI that make it safe from use by bad actors, without unfairly penalizing those who are trying to use the technology to advance humanity’s capabilities.

Are regulations on AI really needed?

“You crazy kids keep it down! If I have to come in there, there’ll be trouble…”

Which brings us to the Biden administration’s executive order.

While the White House had informal talks with some of the leading players in generative AI earlier in the year, and Democratic Senator Chuck Schumer has done some work on establishing initial guidelines on the technology, the new executive order is the most significant step the US government has so far taken towards a set of regulations on AI.

There are eight fundamental principles to the executive order:

  • Standards for safety and security
  • Protecting citizen privacy
  • Advancing equity and civil rights
  • Protecting consumers, patients and students
  • Supporting workers
  • Promoting innovation and competition
  • Advancing American leadership abroad
  • Ensuring responsible and effective government use of AI

The fundamental principles are both modern in terms of the technologies to which they apply, and distinctly Bidenesque – surfacing from under the radar with little by way of advance warning, heady with pragmatism and drenched in American motherhood and apple pie.

But there’s no denying that they also touch on many of the main concerns that have been raised with the application and use of generative AI so far.

Sam Altman of OpenAI.

“First word? Starts with D? Democratic oversight?!” Sam Altman of OpenAI.

Breaking down the Biden order

On safety and security: the order requires makers of powerful AI systems to share their safety test results with the US government, and instructs the National Institute of Standards and Technology (NIST) to set rigorous standards for red-team testing of the safety of such systems before they’re allowed to be released for public use.

In addition, it provides for the establishment of an advanced cybersecurity program, to find and fix vulnerabilities in critical software, and establishes a National Security Memorandum to direct further actions on AI and security, so that the US military and intelligence community are bound to use AI safely, ethically, and effectively in their missions.

On protecting privacy: the order calls on Congress to pass bipartisan data privacy legislation to protect all Americans and their data. Such legislation on AI should include priority federal support for the development of privacy-preserving techniques.

Such AI legislation should also establish guidelines for federal agencies, so they can assess the effectiveness of available techniques for preserving data and personal privacy in the age of generative AI.

On equity and civil rights: generative AI’s tendency to ingrain social prejudices into the “way things work” has been demonstrated time and time again. The order demands that developers address algorithmic bias, and pledges the development of best practices in critical use cases like the criminal justice system.

On consumer, patient and student protection: the order commits the government to advancing the responsible use of AI in healthcare, and to provide a system to report any issues that arise from the use of AI in a healthcare setting.

It also commits the government to developing supporting resources to allow educators to safely deploy AI in the classroom.

On supporting workers: this is one of the biggest issues, because one of the public’s biggest fears is that AI will put them out of work. The order’s response might, to some, feel a little wishy-washy – it pledges to develop best practices and principles to “address” job displacement, labor standards, workplace equity, health and safety, and data collection.

It also commits the government to producing a report on the potential impact of AI on workplaces, and any necessary mitigation strategies as we shift from a largely human workforce to a mixed human-system workforce.

On promoting innovation and competition: the order is on firmer, if no more original, ground. It will use the National AI Research Resource – a tool to provide AI researchers and students with access to key AI resources and data – and expand available grants for AI research in areas of national and international significance, like healthcare and climate change.

It will also promote the growth of a ground-up AI ecosystem by giving small developers and entrepreneurs access to technical assistance and resources, and helping small businesses commercialize AI breakthroughs. The idea behind that is not only to spread the general public’s knowledge and acceptance of generative AI, but also to ensure the technology doesn’t become the exclusive preserve of the extremely rich and the mortifyingly powerful.

On advancing American leadership abroad: possibly the most thoroughly Biden element of the order, it pledges that the US government will work bilaterally and multilaterally with stakeholders abroad to advance the development of AI and its capabilities.

Unless of course those stakeholders are Russian, Chinese, or presumably, given the latest state of the newest pre-global conflict in the world, Palestinian.

And on ensuring responsible and effective government use of AI: the order is on stronger ethical footing – it provides for rapid access by agencies to appropriate AI technology, the development of appropriate agency guidance for the use of that technology, and the swift hiring-in of expertise in such technology, so that the US government and its agencies can be as clued-up as they need to be in 2024 and beyond.

Of all the attempts so far to develop wide-ranging and effective regulations on AI, the Biden executive order is by far the most comprehensive.

How much of the order sees the long-term light of day, is taken up as a set of guiding principles internationally, or at this point even survives the 2024 presidential election, remains to be seen.

The post Biden executive order brings us closer to regulations on AI appeared first on TechHQ.

New Text Messaging Regulations Impacting SaaS: From compliance to deliverability https://techhq.com/2023/08/new-text-messaging-regulations-impacting-saas-from-compliance-to-deliverability/ Tue, 01 Aug 2023 15:58:00 +0000 https://techhq.com/?p=226806


If you’re running SMS & MMS text messaging for your customers, there are new mandatory business text message requirements to be aware of that will affect your text messaging deliverability, revenue and customer retention if not addressed.

As of this year, in an effort to stop spam messaging, all major API providers of business text messaging (application-to-person, or A2P, messaging) now require their customers to register their local numbers, or 10-digit long codes (10DLC), with The Campaign Registry (TCR).

TCR was created to validate businesses and their messaging campaigns, protecting mobile users from scammers and reducing the risk of spam. It was launched in late 2020, in conjunction with the carriers’ A2P 10DLC rollout, to oversee the registration of 10DLC numbers and associate them with specific campaigns.

Registration involves businesses being vetted, agreeing to compliance requirements, and enabling message filtering mechanisms to identify and block spam. Certain carriers, like T-Mobile, AT&T, and Verizon, already impose higher pass-through fees – up to $0.01 per SMS – on unregistered messages, to encourage businesses to comply.

The challenge with new registration deadlines

TCR registration is now mandatory industry-wide. However, each provider has set its own deadline, and any unregistered SMS and MMS messages sent after those deadlines have passed will be completely blocked. Twilio – one of the largest messaging API companies – has announced a complete cutoff of unregistered 10DLC messages by August 31 this year.

The deadlines have led to businesses rushing to register their 10DLC numbers at once. This has created bottlenecks and huge wait times of eight weeks or more. 

Why does this pose such a challenge? Because an eight-week delay would put businesses past the deadline, causing their messages to be blocked – and even a single day of being unable to send messages can mean massive loss of revenue and customer churn.

How to avoid registration delays

Telgorithm, Inc. stands out as a provider that foresaw the challenges in the business text messaging industry and addressed them proactively.

The company adopted 10DLC messaging early, so it has been working on making the registration process as quick and painless as possible by introducing automated registration. Telgorithm has integrated its APIs with TCR’s, so that when its customers register as Campaign Service Providers (CSPs) with TCR, the information inputted gets sent directly to Telgorithm. It is then automatically imported into the customer’s Telgorithm account, so they do not need to fill it in twice. The automation significantly reduces the risk of typos that can lead to Brand and Campaign rejections, and speeds up the registration process to just a matter of days.

Additionally, after you submit for TCR registration and become a CSP, your Campaigns must be vetted by your provider’s Direct Connect Aggregator (DCA). This vetting process is manual and currently takes between four and eight weeks to complete, leaving messages unregistered until approval comes through. That means higher Carrier pass-through fees and possible disruptions if a Campaign is not vetted and approved by the registration deadline. Thanks to Telgorithm’s trusted relationships with the DCAs, however, the current wait time for its customers to get Campaigns fully approved and compliant is five to seven days.

Once you are a CSP with TCR, you can demo Telgorithm’s Campaign vetting approval speed for yourself. This requires zero coding or contract to test. 

The faster a company gets registered, the faster it can be confident its messages won’t be blocked. It can also start to take advantage of the benefits that come with registration: lower carrier pass-through fees and less filtering. The most significant benefit, however, is higher messaging throughput, or rate limits – increasing the number of text messages a Campaign is able to send within a specific time.

Rate limits were established to both prevent the Carriers’ systems from being overwhelmed or bottlenecked, and protect the end user from unsolicited messages and/or spam. It’s important to know that each Carrier monitors rate limits differently. For example, Verizon monitors by messages sent per number per minute, while AT&T monitors by messages sent per Campaign per minute. T-Mobile, on the other hand, has a daily cap, monitoring messages sent per day, inclusive of all Campaigns that sit under the given Brand.
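To make those differences concrete, here is a minimal sketch of per-carrier rate-limit tracking. This is a hypothetical illustration, not Telgorithm’s implementation: the numeric limits are placeholders (real limits are assigned per Brand and Campaign at registration); only the monitored dimensions – per number for Verizon, per Campaign for AT&T, per Brand per day for T-Mobile – come from the description above.

```python
import time
from collections import defaultdict, deque

# Placeholder limits for illustration only; actual A2P limits depend on
# the trust score assigned to your Brand at registration.
CARRIER_RULES = {
    "verizon": {"key": "number",   "window": 60,     "limit": 60},    # msgs per number per minute
    "att":     {"key": "campaign", "window": 60,     "limit": 75},    # msgs per Campaign per minute
    "tmobile": {"key": "brand",    "window": 86_400, "limit": 2_000}, # msgs per Brand per day
}

class CarrierRateTracker:
    """Sliding-window counter keyed on the dimension each carrier monitors."""

    def __init__(self, rules=CARRIER_RULES):
        self.rules = rules
        self.sent = defaultdict(deque)  # (carrier, key_value) -> send timestamps

    def allow(self, carrier, number, campaign, brand, now=None):
        now = time.monotonic() if now is None else now
        rule = self.rules[carrier]
        key_value = {"number": number, "campaign": campaign, "brand": brand}[rule["key"]]
        stamps = self.sent[(carrier, key_value)]
        # Drop timestamps that have aged out of the carrier's window.
        while stamps and now - stamps[0] >= rule["window"]:
            stamps.popleft()
        if len(stamps) < rule["limit"]:
            stamps.append(now)
            return True
        return False  # over the limit: queue instead of sending
```

The point of the sketch is that a single global counter is not enough – Verizon throttles a busy number, AT&T a busy Campaign, and T-Mobile a busy Brand, so each message has to be checked against a differently keyed window.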

The biggest takeaway is that if your Campaign sends more messages than each Carrier allots, the excess messages will be blocked – and therefore never received by the end user.

The mission-critical importance of keeping track of your rate limits

New requirements mean we need to be thinking about managing messaging differently. Although the increased rate limit may appear to be a sure victory, being able to leverage it after registration is not guaranteed. A provider must have new, dedicated technology in place to properly manage customer campaigns and keep track of their individual rate limits, as well as the number of messages going to each Carrier. 

Not being aware of these limits for each of your customers’ Brands and Campaigns leads either to unnecessary message capping or to exceeded rate limits, meaning those texts will be dropped or blocked. This contributes to a negative customer experience due to poor deliverability and, ultimately, stunts business profitability.

How to take control of rate limits

Telgorithm, founded by telecom industry experts who have themselves been customers of API providers in the past, is equipped with the essential tech to ensure deliverability.

A2P messaging

Source: Telgorithm

Along with overall ease of use thanks to automation, Telgorithm has built a suite of tools that achieves an average of 98.7% text message deliverability for its customers. One of these is Smart Queueing, which automatically tracks and manages your approved TCR and Carrier rate limits so you can send at the fastest speeds without ever exceeding your limits; any additional messages are queued to send at the next available opportunity.

Working in tandem with Smart Queueing, Message Prioritization enables you to proactively program your most urgent messages to be delivered first so that they’re not sitting in the queue. 

Next, Time Routing allows you to strategically determine what happens with the messages that are queued. You can rescind the messages from the queue that no longer need to be sent, or schedule them to be sent during a specific time period to give the end user the best possible experience.
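Taken together, the queue-then-send pattern behind features like these can be sketched generically. The class below is a hypothetical illustration, not Telgorithm’s API: urgent messages drain first, rescinded messages are dropped, and messages outside their delivery window are held back until it opens.

```python
import heapq
import itertools

class MessageQueue:
    """Toy priority queue: urgent messages drain first; queued messages can be
    rescinded, or held until their delivery window opens (cf. Time Routing)."""

    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # tiebreak: FIFO within a priority level
        self._rescinded = set()

    def enqueue(self, msg_id, body, priority=10, not_before=0.0):
        # Lower priority number = more urgent.
        heapq.heappush(self._heap, (priority, next(self._counter), msg_id, body, not_before))

    def rescind(self, msg_id):
        # Mark a queued message as no longer needing to be sent.
        self._rescinded.add(msg_id)

    def drain(self, budget, now):
        """Pop up to `budget` sendable messages (the current rate-limit allowance)."""
        sendable, held = [], []
        while self._heap and len(sendable) < budget:
            prio, n, msg_id, body, not_before = heapq.heappop(self._heap)
            if msg_id in self._rescinded:
                continue  # dropped from the queue entirely
            if now < not_before:
                held.append((prio, n, msg_id, body, not_before))
                continue  # outside its delivery window; keep for later
            sendable.append((msg_id, body))
        for item in held:
            heapq.heappush(self._heap, item)
        return sendable
```

In this sketch, `drain` would be called once per rate-limit window with the remaining allowance as its budget – so prioritization, rescinding, and time routing all compose with the rate-limit check rather than fighting it.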

Telgorithm customers can therefore optimize their rate limits, reduce expenses, and enhance the overall customer experience. Other features include Number Verification, which checks that numbers are real and in service before a message is sent, saving on costs and helping guarantee deliverability.

Why Telgorithm?

Telgorithm is a next-gen A2P text messaging API provider and the only one to automate the entire SMS & MMS journey from compliance to deliverability, enabling SaaS businesses to grow revenue and improve customer experience. Founded by a team of telecom experts that anticipated the new era of business text messaging, it built tech to support businesses through the industry-required changes and continues to enable their success on the pathway after registration.

With the upcoming deadlines marking a new era for A2P messaging, Telgorithm wants to help your business adapt. It can guide you through 10DLC registration and ensure it goes as quickly and smoothly as possible with automated processes. After that, it will take control of your customers’ rate limits for you, ensuring they are never capped or exceeded. Overall, its suite of dedicated API tools achieves 99% message deliverability for your customers. 

Telgorithm’s mission is to offer a more reliable and transparent business text messaging service that prioritizes helping its customers understand the latest industry changes that affect them. At the same time, it doesn’t expect its customers to be experts, so it offers direct lines of communication to support teams and engineers, allowing customers to focus on what’s important to their business. Telgorithm is the future of A2P text messaging for any SaaS business offering SMS & MMS messaging to its customers today.

To learn more about how Telgorithm can help you improve your message deliverability and overall experience for your customers to grow revenue, visit its website.

The post New Text Messaging Regulations Impacting SaaS: From compliance to deliverability appeared first on TechHQ.
