Hugging Face Safetensors vulnerable to supply chain attacks https://techhq.com/2024/03/hugging-face-safetensors-vulnerable-to-supply-chain-attacks/ Thu, 07 Mar 2024 09:30:55 +0000 https://techhq.com/?p=232569

• Hugging Face vulnerabilities revealed.
• Supply chain attacks can get into Hugging Face safetensors.
• That means the whole Hugging Face community could be under threat.

Recent research has found that the new Hugging Face Safetensors conversion service is vulnerable to supply chain attacks, with hackers able to hijack AI models submitted by users. As reported by The Hacker News, cybersecurity researchers from HiddenLayer discovered that it is “possible to send malicious pull requests with attacker-controlled data from the Hugging Face service to any repository on the platform.” The researchers also found that it was possible to “hijack any models that are submitted through the conversion service.”

For those who don’t know, Hugging Face is a collaboration platform that software developers use to host and work together on datasets, machine learning models, and applications, many of them pre-trained. Users can build, train, and deploy these as they see fit.

Vulnerabilities in Hugging Face

Safetensors is a format designed by Hugging Face to store tensors with security as a priority. Users can also convert PyTorch models to Safetensors through a pull request if desired. The format stands in contrast to “pickles,” another format, which has been exploited by malicious actors to deploy tools such as Mythic and Cobalt Strike and to run unauthorized code.
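To illustrate the difference, here is a minimal, hypothetical sketch (the class name and echoed command are invented for demonstration, and this is not the HiddenLayer exploit) showing how loading a pickled file can execute arbitrary code, while loading a Safetensors file only returns tensor data:

```python
import os
import pickle

import torch
from safetensors.torch import save_file, load_file

# A pickled object can define __reduce__ so that unpickling runs arbitrary code.
class MaliciousPayload:
    def __reduce__(self):
        # Harmless stand-in for an attacker's command (e.g. fetching a reverse shell).
        return (os.system, ("echo 'arbitrary code executed on load'",))

with open("model.pkl", "wb") as f:
    pickle.dump(MaliciousPayload(), f)

with open("model.pkl", "rb") as f:
    pickle.load(f)  # the command above runs here, just by loading the file

# Safetensors, by contrast, stores only named tensors - no executable objects.
save_file({"weight": torch.zeros(2, 2)}, "model.safetensors")
tensors = load_file("model.safetensors")  # returns a dict of tensors; nothing executes
print(tensors["weight"].shape)
```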

The recent revelation of possible vulnerabilities comes as a shock to many of Hugging Face’s 1.2 million registered users. The research showed that malicious pull requests could be made via a hijacked model: because the service is expected to convert that model, harmful actors can pose as the conversion bot and request modifications to any repository on the platform.

It’s also possible for hackers to extract the tokens associated with SFConvertbot, the bot that generates the conversion pull requests. With those tokens, an attacker can send a malicious pull request to any repository on the Hugging Face site, and from there manipulate the model, even implanting neural backdoors.

According to researchers, “an attacker could run any arbitrary code any time someone attempted to convert their model.” Essentially, a model could be hijacked upon conversion without the user even knowing it.

An attack could result in the theft of a user’s Hugging Face token if they try to convert their personal repository. Hackers may also be able to access datasets and internal models, resulting in malicious interference.

The complexities of these vulnerabilities don’t stop there. An adversary could exploit the fact that any user can submit a conversion request for a public repository, potentially modifying or hijacking a widely used model. That poses a substantial risk to the overall supply chain. Researchers summed this up by saying, “the conversion service has proven to be vulnerable and has had the potential to cause a widespread supply chain attack via the Hugging Face official service.”

Attackers could get access to a container that runs the service, and choose to compromise any models that have been converted by it.

Hugging Face – traditionally, bad things happen afterwards…

The implications go beyond singular repositories. The overall trustworthiness and reliability of the Hugging Face service and its community is under threat.

HiddenLayer co-founder and CEO Chris “Tito” Sestito emphasized the effects this vulnerability could have on a wider scale, saying, “This vulnerability extends beyond any single company hosting a model. The compromise of the conversion service has the potential to rapidly affect the millions of users who rely on these models to kick-start their AI projects, creating a full supply chain issue. Users of the Hugging Face platform place trust not only in the models hosted there but also in the reputable companies behind them, such as Google and Microsoft, making them all the more susceptible to this type of attack.”

LeftoverLocals

HiddenLayer’s disclosure comes just one month after Trail of Bits revealed a vulnerability known as LeftoverLocals (CVE-2023-4969), which carries a Common Vulnerability Scoring System (CVSS) score of 6.5. This particular security flaw enables the retrieval of data from general-purpose graphics processing units (GPGPUs) manufactured by Apple, AMD, Qualcomm, and Imagination. The CVSS score of 6.5 marks the vulnerability as moderately severe, but it still puts sensitive data at risk.

The memory leak uncovered by Trail of Bits stemmed from a failure to isolate process memory, meaning a local attacker could gain access to and read memory from other processes, including the interactive large language model (LLM) sessions of other users.

The Hugging Face vulnerabilities, like those disclosed by Trail of Bits, only emphasize the need for stricter security protocols around AI technologies. Currently, the adoption of AI is growing at such a rate that security measures cannot keep up. HiddenLayer is one company creating solutions for such shortcomings, with its AISec platform offering a range of products designed to protect ML models against malicious code injections and attacks.

Nevertheless, the revelation of Hugging Face’s Safetensors conversion tool issues gives us a stark reminder of the challenges faced by AI and machine learning sectors. Supply chain attacks could put the integrity of AI models at risk, as well as the ecosystems that rely on such technologies. Right now, investigations are continuing into the vulnerability, with the machine learning community on high alert, and more vigilant than ever before.

Inkitt: what happens when AI eats its own words? https://techhq.com/2024/03/ai-will-help-writers-create-literally-average-stories/ Mon, 04 Mar 2024 09:30:39 +0000 https://techhq.com/?p=232469

  • Inkitt AI help for writers shows successful patterns.
  • Success delivered by what are proven to be winning formulae.
  • We look forward to Fast & Furious 52‘s release in 2066.

The latest $37m funding round for the self-publishing platform Inkitt was awarded at least in part due to its intention to use large language models that work on behalf of its authors. The AI will guide submissions to the eponymous app in areas such as readability, plot, and characterization.

Self-publishing is hugely popular among authors. It circumvents the often-frustrating process of finding an agent, the rejections from established publishing houses, and the erosion of income caused by parties in the chain who each take a cut of sales revenue. An AI-powered virtual assistant can help authors with advice and offer changes to a text that are drawn from previously successful stories.

Inkitt’s AI amalgamates the output from several large language models to find trends in the enormous body of previously published books, giving writers help to align their work with already successful and popular works. At first sight, its approach is clearly more appropriate than having ‘authors’ simply use an AI to create words for a book. It’s also a step above once-respected news outlets using AI to write stories. But a deeper understanding of how large language models work informs us that the boundaries of creativity possible with AI are claustrophobic.

“Cuba book” by @Doug88888 is licensed under CC BY-NC-SA 2.0.

Whether in video, visual art, game design, or text, machine learning algorithms are trained on extant publications. During the training phase, they process large quantities of data and learn patterns that can then be used to reproduce material similar to that found in the training corpus.

In the case of a novel or screenplay’s structure, then, what’s succeeded in the past (in terms of popularity and, often, revenue generated) can be teased out from the also-rans. It’s a process that is as old as creativity itself, albeit a habit that’s formed without digital algorithmic help. Hollywood industry experts can produce lists of formulae that work for the plot, the rhythm of narrative flow, characterization, and so on. Such lists, whether ephemeral or real, inform the commissioning and acceptance of new works that will have the best chance to succeed.

The threat to creativity from the models used in ways like that proposed by Inkitt is twofold. The most obvious is one of the repetition of successful formulae. This means, depending on your choice of language, works that are on-trend, derivative, zeitgeisty, or repetitious.

The second threat comes from the probability curves embedded into the AI code. Any creative work chewed up by an algorithm will have its deviations from the average diminished. What can’t be judged particularly easily is what makes something an exception, and whether it differs from the average because it’s badly created or because it’s superbly created. Truly fantastic creative works may be given a lesser weight because they don’t conform on other measures, like sentence length or a color palette that is (currently) familiar.

The effect is one of standardization and averaging across the gamut of creative output so that a product is successfully conformist to the mean. Standardization equals conforming, which equals success. But standardization leads inexorably to stagnation.

In practical uses of AI today, many of the traits and methods of models are perfect for their designed purpose. Data analytics of spending patterns informs vendors’ choices for new product development based on what sells well. Outliers and exceptions have little importance and are rightly ignored by the model’s probability curve.

But in areas of creating new art, product design, music composition, or text creation, the exceptions can have value, a value that is increased by not conforming to average patterns of success, readability, aesthetic attractiveness, characterization, or one of a thousand other variables at play. If conformity to guidelines means success, then how we define success is the interesting question. History is littered with composers, artists, and writers who didn’t conform and were successful during their lifetimes or posthumously; plenty, too, who were successful conformists; and many who kicked against prevailing strictures, got nowhere, and died in poverty.

“book” by VV Nincic is licensed under CC BY 2.0.

So what help can AI actually deliver for writers? As in many areas of life and business, it can work well as a tool, but it cannot – or at least should not – be allowed to dictate the creative elements of art.

By reducing creativity to an algorithmically generated idea of “what works,” talent that’s non-conforming is immediately stymied. It depends, of course, on what the creator’s desired outcome is, or how they define success for themselves. If they want a greater chance of achieving mainstream popularity, then the Inkitt AI will help guide them in what to change to better fit into the milieu. Many dream of being the scriptwriter or 3D visual designer for the next movie blockbuster, and there is value in that. Inkitt may make people better writers, but it’s the individual’s idea of what a ‘better’ writer is that will inform their decision whether or not to sign up.

Individual human voices can make great creative works. But by placing those works inside a mass of mediocrity (and worse) and teaching an algorithm to imitate the mean, what’s produced is only ever, at best, going to be slightly better than average. As more content is created by AI and it too becomes part of the learning corpora of machine learning algorithms, AIs will become self-replicating, but not in the manner of dystopian sci-fi. Much of the future’s published content will just be very, very dull.

Oatmeal for breakfast, lunch, and dinner.

Amalgamating for the sellable mean turns tears of human creativity into nothing more than raindrops in the flood.

Oh, Air Canada! Airline pays out after AI accident https://techhq.com/2024/02/air-canada-refund-for-customer-who-used-chatbot/ Wed, 21 Feb 2024 09:30:24 +0000 https://techhq.com/?p=232218

  • Ruling says Air Canada must refund customer who acted on information provided by chatbot.
  • The airline’s chatbot isn’t available on the website anymore.
  • The case raises the question of autonomous AI action – and who (or what) is responsible for those actions.

The AI debate rages on, as debates in tech are wont to do.

Meanwhile, in other news, an Air Canada chatbot suddenly has total and distinct autonomy.

Although it couldn’t take the stand, when Air Canada was taken to court and asked to pay a refund offered by its chatbot, the company tried to argue that “the chatbot is a separate legal entity that is responsible for its own actions.”

After the death of his grandmother, Jake Moffatt visited the Air Canada website to book a flight from Vancouver to Toronto. Unsure of the bereavement rate policy, he opened the handy chatbot and asked it to explain.

Now, even if we take the whole GenAI bot explosion with a grain of salt, some variation of the customer-facing ‘chatbot’ has existed for years. Whether it churns out automated responses and a number to call, or replies with the off-key chattiness now ubiquitous in generative AI’s output, the chatbot is the primary response consumers get from almost any company.

And it’s trusted to be equivalent to getting answers from a human employee.

So, when Moffatt was told he could claim a refund after booking his tickets, he took the bot at its word and booked flights right away, safe in the knowledge that – within 90 days – he’d be able to claim a partial refund from Air Canada.

He has the screenshot to show that the chatbot’s full response was:

If you need to travel immediately or have already travelled and would like to submit your ticket for a reduced bereavement rate, kindly do so within 90 days of the date your ticket was issued by completing our Ticket Refund Application form.

Which seems about as clear and encouraging as you’d hope to get in such circumstances.

He was surprised then to find that his refund request was denied. Air Canada policy actually states that the airline won’t provide refunds for bereavement travel after the flight has been booked; the information provided by the chatbot was wrong.

Want an Air Canada refund? Talk to the bot...

Via Ars Technica.

Moffatt spent months trying to get his refund, showing the airline what the chatbot had said. He was met with the same answer: refunds can’t be requested retroactively. Air Canada’s argument was that because the chatbot response included a link to a page on the site outlining the policy correctly, Moffatt should’ve known better.

We’ve underlined the phrase that the chatbot used to link further reading. The way that hyperlinked text is used across the internet – including here on TechHQ – means few actually follow a link through. Particularly in the case of the GenAI answer, it functions as a citation-cum-definition of whatever is underlined.

Still, the chatbot’s hyperlink meant the airline kept refusing to refund Moffatt. Its best offer was a promise to update the chatbot and give Moffatt a $200 coupon. So he took the airline to court.

Moffatt filed a small claims complaint in Canada’s Civil Resolution Tribunal. Air Canada argued that not only should its chatbot be considered a separate legal entity, but also that Moffatt never should have trusted it. Because naturally, customers should of course in no way trust systems put in place by companies to mean what they say.

Christopher Rivers, the Tribunal member who decided the case in favor of Moffatt, called Air Canada’s defense “remarkable.”

“Air Canada argues it cannot be held liable for information provided by one of its agents, servants, or representatives—including a chatbot,” Rivers wrote. “It does not explain why it believes that is the case” or “why the webpage titled ‘Bereavement travel’ was inherently more trustworthy than its chatbot.”

Rivers also found that Moffatt had no reason to believe one part of the site would be accurate and another wouldn’t – Air Canada “does not explain why customers should have to double-check information found in one part of its website on another part of its website,” he wrote.

In the end, he ruled that Moffatt was entitled to a partial refund of CAD $650.88 (around US$482) off the original fare of CAD $1,640.36 (around US$1,216), as well as additional damages to cover interest on the airfare and Moffatt’s tribunal fees.

Ars Technica heard from Air Canada that it will comply with the ruling and considers the matter closed. Moffatt will receive his Air Canada refund.

The AI approach

Last year, CIO of Air Canada Mel Crocker told news outlets that the company had launched the chatbot as an AI “experiment.”

Originally, it was a way to take the load off the airline’s call center when flights were delayed or cancelled. Read: give customers information that would otherwise be available from human employees – which must be presumed to be accurate, or its entire function is redundant.

In the case of a snowstorm, say, “if you have not been issued your new boarding pass yet and you just want to confirm if you have a seat available on another flight, that’s the sort of thing we can easily handle with AI,” Crocker told the Globe and Mail.

Over time, Crocker said, Air Canada hoped the chatbot would “gain the ability to resolve even more complex customer service issues,” with the airline’s ultimate goal being to automate every service that did not require a “human touch.”

Crocker said that where Air Canada could, it would use “technology to solve something that can be automated.”

The company’s investment in AI was so great that, she told the media, the money put towards AI was greater than the cost of continuing to pay human workers to handle simple enquiries.

But the fears that robots will take everyone’s jobs are fearmongering nonsense, obviously.

In this case, liability might have been avoided if the chatbot had warned customers that its information could be inaccurate. Not great optics, admittedly, when you’re spending more on the bot than on human staff who are at least marginally less likely to hallucinate refund policies out of thin data.

Because it didn’t include any such warning, Rivers ruled that “Air Canada did not take reasonable care to ensure its chatbot was accurate.” The responsibility lies with Air Canada for any information on its website, regardless of whether it’s from a “strategic page or a chatbot.”

This case opens up the question of AI culpability in the ongoing debate about its efficacy. On the one hand, we have a technology that’s lauded as infallible – or at least on its way to infallibility, and certainly as trustworthy as human beings, with their legendary capacity for “human error.” In fact, it’s frequently sold as a technology that eradicates human error, (and, sometimes, the humans too) from the workplace.

So established is the belief that (generative) artificial intelligence is intelligent that, when a GenAI-powered chatbot makes a mistake, the blame lies with it, not with the humans who implemented it.

Fears of what AI means for the future are fast being reduced in the public media to the straw man that it will “rise up and kill us” – a line not in any way subdued by calls for AI development to be paused or halted “before something cataclysmic happens.”

The real issue though is the way in which humans are already beginning to regard the technology as an entity separate from the systems in which it exists – and an infallible, final arbiter of what’s right and wrong in such systems. While imagining the State versus ChatGPT is somewhat amusing, passing off corporate error to a supposedly all-intelligent third party seems like a convenient “get out of jail free card” for companies to play – though at least in Canada, the Tribunal system was engaged enough to see this as an absurd concept.

Imagine for a moment that Air Canada had better lawyers, with much greater financial backing, and the scenario of “It wasn’t us, it was our chatbot” becomes altogether more plausible as a defence.

Ultimately, what happened here is that Air Canada refused compensation to a confused and grieving customer. Had a human employee told Moffatt he could get a refund after booking his flight, then perhaps Air Canada could have refused – but only because of the unspoken assumption that said employee would be working from given rules – a set of data upon which they were trained, perhaps – that they’d actively ignored.

In fact, headlines proclaiming that the chatbot ‘lied’ to Moffatt are following the established formula for a story in which a disgruntled or foolish employee knowingly gave out incorrect information. The chatbot didn’t ‘know’ what it said was false; had it been given accurate enough training, it would have provided the answer available elsewhere on the Air Canada website.

At the moment, the Air Canada chatbot is not on the website.

Feel free to imagine it locked in a room somewhere, having its algorithms hit with hockey sticks, if you like.

It’s also worth noting that while the ruling was made this year, it was 2022 when Moffatt used the chatbot – back in the pre-ChatGPT dark ages of AI. While the implications of the case affect the AI industry as it exists here and now, the chatbot’s error in itself isn’t representative, given that it was an early example of AI use.

Still, Air Canada freely assigned it the culpability of a far more advanced intelligence, which speaks to perceptions of GenAI’s high-level abilities. Further, this kind of thing is still happening:

"Howdy doodley doo!" The chipper nature of chatbots often disguises their data or algorithm flaws.

“No takesies backsies.” There’s that chatbot chattiness…

Also, does it bother anyone else that an AI chatbot just hallucinated a more humane policy than the human beings who operated it were prepared to stand by?

O’Reilly report predicts technology trends for 2024 https://techhq.com/2024/02/oreilly-tech-trends-for-2024/ Thu, 15 Feb 2024 12:30:25 +0000 https://techhq.com/?p=232011

• What technology trends can we expect to hit big in 2024?
• Generative AI dominated 2023 – will its bubble burst in 2024?
• Security remains a strong trend – what will this year bring?

We’ve all lived through technological advancements that were once considered sci-fi. Some of us were there when the web was unveiled 31 years ago, marking the first glimpses of a future where “browsing” took on a whole new meaning. While there have been many technological advancements over the succeeding years, 2023 may have been one of the most disruptive, with AI, in particular large language models, transforming the industry, and the world.

AI has already altered the software industry, but believe it or not, we are still at the very beginning of AI’s narrative. What’s to come is hard to predict, but according to the well-known O’Reilly learning platform, shifting usage patterns can start to give us a clearer indication of what to expect.

Drawing on O’Reilly’s internal “Units Viewed” metric, this snapshot of trends is based on data covering January 2022 to November 2023. According to the O’Reilly report, technology adoption in companies tends to be gradual, with established technology stacks evolving slowly over time. This is why it is important to recognize the unique technology landscapes of individual companies.

O’Reilly software trends for 2024

O’Reilly found that programmers continued to write software throughout 2023, despite a decline in the usage of software development content on its platform. This in no way implies a decrease in the overall significance of software development, and the impact of software on our daily lives continues to grow.

A trend that will not change is that of software developers designing larger, increasingly complex projects. The uncertainty, however, is whether generative AI will help manage this growing complexity or add a new layer of complexity itself. Many are using AI systems, like GitHub Copilot, to write code, treating AI as a quick fix. In fact, O’Reilly found that 92% of software developers are now using AI to create low-level code.

This leaves a few questions:

  • Is AI capable of doing high-level design?
  • How will AI change the things software developers want to design?

Perhaps the key question is how humans can collaborate with generative AI to design systems effectively. There’s little doubt that humans will still be required to understand and specify designs. And, while there has been an overall decline in most software architecture and design topics according to O’Reilly, there are notable exceptions. For instance, enterprise architecture, event-driven architecture, domain-driven design, and functional programming are examples of topics that have either shown growth or experienced relatively small declines.

These changes indicate a shifting landscape in software development; one that leans more towards the design of distributed systems that handle substantial real-time data flows. The apparent growth in content in these evolving fields seems to reflect a focus on addressing challenges posed by managing large volumes of data in distributed systems.

There has also been a decline in microservices. According to O’Reilly, this popular architectural approach experienced a 20% drop in interest during 2023, with many developers advocating for a return to monolithic applications. It seems some organizations adopted microservices because they were fashionable rather than necessary, which can lead to problems when they are implemented poorly.

Design patterns also saw a decline (16%) in interest among developers, which may be driven by AI’s involvement in writing code and a growing focus on maintaining existing applications. That is somewhat surprising, since design patterns arguably grow in importance as software becomes more flexible, even in legacy applications. Historically, though, bursts of interest in design patterns have also been accompanied by surges in pattern abuse, such as developers implementing FactoryFactoryFactory factories.
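For readers who haven’t met that joke before, here is a deliberately silly, hypothetical sketch of the sort of pattern abuse being mocked: three layers of factory indirection wrapped around an object that a plain constructor call would have produced just as well.

```python
class Report:
    def __init__(self, title: str):
        self.title = title

# Pattern abuse: each "factory" exists only to create the next one down.
class ReportFactory:
    def create(self, title: str) -> Report:
        return Report(title)

class ReportFactoryFactory:
    def create(self) -> ReportFactory:
        return ReportFactory()

class ReportFactoryFactoryFactory:
    def create(self) -> ReportFactoryFactory:
        return ReportFactoryFactory()

# Three hops of indirection...
report = ReportFactoryFactoryFactory().create().create().create("Q4 numbers")

# ...versus what the code actually needed.
report = Report("Q4 numbers")
```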

O’Reilly’s report suggests a shift in interest regarding software development, primarily influenced by practical considerations, and occasional misapplications of methodologies.

O’Reilly AI trends for 2024

Right now, the GPT family of models is the main talking point when it comes to AI. In 2023 alone, usage of GPT-related content went up a staggering 3,600%. This was kickstarted by the introduction of ChatGPT in November 2022, of course. As far back as 2020, however, GPT-3 was making a splash on the AI scene, with GPT-1 and GPT-2 launched in 2018 and 2019 respectively.

O’Reilly’s analysis has shown that interest in the broader field of natural language processing (NLP) has experienced a substantial increase, specifically a 195% rise among its users. This is a growing trend that is expected to continue throughout 2024, with software developers inclined to focus on building applications and solutions using the APIs provided for GPT and other language models. Therefore, they may become less interested in ChatGPT.

Other substantial gains included Transformers (a type of deep learning model architecture), up 325%, and generative models, up 900%. Prompt engineering, only introduced in 2022, has become a significant topic, with a similar usage to Transformers. NLP is used almost twice as much as GPT, although, according to O’Reilly’s data, the next year will be driven hugely by GPT models and generative AI.

Here are some other key insights taken from O’Reilly’s analysis, giving us a clearer indication of AI trends for 2024:

  • Deep learning remains fundamental to modern AI, with a reported 19% growth in content usage, while other AI techniques, such as reinforcement learning, have also seen positive gains.
  • Programming libraries, such as PyTorch, a Python library, continue to grow and dominate programming in machine learning and AI, with a 25% increase.
  • TensorFlow has reversed a decline with a modest 1.4% gain, and it seems there is a noticeable decline in interest for scikit-learn and Keras.
  • Interest in operations for machine learning (MLOps) has increased by 14%. This reflects the recognition of the importance of deploying, monitoring, and managing AI models.
  • LangChain, a framework for generative AI applications, is showing signs of emergence, particularly in the retrieval-augmented generation (RAG) pattern (a minimal sketch of RAG follows this list).
  • Vector databases are expected to gain importance, albeit with specialized usage.
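For readers unfamiliar with the RAG pattern mentioned above, here is a minimal, framework-free sketch of the idea. The embed() and llm() helpers are placeholders invented for illustration; a real system would call an embedding model and a language model (via LangChain or otherwise) instead.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder: a real system would call an embedding model here."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(384)

def llm(prompt: str) -> str:
    """Placeholder: a real system would send the prompt to a language model."""
    return f"[answer generated from a prompt of {len(prompt)} characters]"

documents = [
    "Safetensors is a tensor serialization format designed by Hugging Face.",
    "LeftoverLocals is a GPU memory vulnerability disclosed by Trail of Bits.",
    "Microsoft Power BI is a business analytics and visualization tool.",
]
doc_vectors = np.stack([embed(d) for d in documents])

def retrieve(query: str, k: int = 2) -> list[str]:
    # Rank stored documents by cosine similarity to the query embedding.
    q = embed(query)
    scores = doc_vectors @ q / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q))
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

def answer(query: str) -> str:
    # Retrieval step: fetch relevant passages, then stuff them into the prompt
    # so the model can ground its answer in them (the "augmented generation").
    context = "\n".join(retrieve(query))
    return llm(f"Answer using only this context:\n{context}\n\nQuestion: {query}")

print(answer("What is Safetensors?"))
```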

Throughout 2024, and beyond, generative AI’s influence is set to span various industries, including logistics, finance, manufacturing, pharmaceuticals, healthcare, and government.

That indicates a dynamic and evolving landscape in the year to come.

O’Reilly security trends for 2024

Another topic that saw serious interest gains among developers in 2023 is security. According to O’Reilly, the majority of security-related topics showed growth from 2022 through 2023. Network security was the most heavily used topic, growing 5% year-over-year, while usage of governance content grew 22%.

DevSecOps usage grew 30%, one of the largest increases among security topics, while interest in application security topics increased by 42%. This indicates a move towards building security into the entire software development process.

Additional things to watch in 2024

Rise of the machines in 2024? O’Reilly has ideas…

O’Reilly’s analysis signals a variety of technology trends for 2024. Here are some other trends we expect to experience as the year goes on:

  • With a 175% growth, cloud native has become the most used cloud-related topic. This suggests a widespread shift of companies towards developing primarily for the cloud as their main deployment platform.
  • Experiencing a 36% rise, Microsoft Power BI seems set to continue as one of the most widely used data topics.
  • There has been an increased focus on professional development, project management, and project communications, signifying developers’ enhancement of “soft skills” through upskilling.
  • CompTIA A+ encountered the most significant growth in content usage at 58%, suggesting a large increase in people looking to start IT careers.

Mike Loukides, vice president of emerging technology content at O’Reilly, said, “This year marks a rare and genuinely disruptive time for the industry, as the emergence of generative AI promises important changes for businesses and individuals alike.”

Loukides continued: “Efficiency gains from AI do not, however, replace expertise. Our data signals a shift for programming as we know it, with consequences for skills, job prospects, and IT management.” With new innovations rolling out as the year progresses, it’s a time for preparation, with upskilling more critical than ever before.

Meta is gearing up to join the AI chips race https://techhq.com/2024/02/the-ai-chips-race-is-about-to-get-intense-with-metas-artemis/ Tue, 06 Feb 2024 09:30:44 +0000 https://techhq.com/?p=231881

  • Ultimately, Meta wants to break free from Nvidia’s AI chips while challenging other tech giants making their silicon.
  • Meta expects an additional US$9 billion on AI expenditure this year, beyond the US$30 billion annual investment.
  • Will Artemis mark a decisive break from Nvidia, after Meta hoards H100 chips?

A whirlwind of generative AI innovation in the past year alone has exposed major tech companies’ profound reliance on Nvidia. Crafting chatbots and other AI products has become an intricate dance with specialized chips largely made by Nvidia in the preceding years. Pouring billions of dollars into Nvidia’s systems, the tech behemoths have found themselves straining against the chipmaker’s inability to keep pace with the soaring demand. Faced with this problem, industry titans like Amazon, Google, Meta, and Microsoft are trying to seize control of their fate by forging their own AI chips. 

After all, in-house chips would enable the giants to steer the course of their own destiny, slashing costs, eradicating chip shortages, and envisioning a future where they offer these cutting-edge chips to businesses tethered to their cloud services – creating their own silicon fiefdoms, rather than being entirely dependent on the likes of Nvidia (and potentially AMD and Intel).

The most recent tech giant to announce plans to go solo is Meta, which is rumored to be developing a new AI chip, “Artemis,” set for release later this year. 

The chip, designed to complement the extensive array of Nvidia H100 chips recently acquired by Meta, aligns with the company’s strategic focus on inference—the crucial decision-making facet of AI. While bearing similarities to the previously announced MTIA chip, which surfaced last year, Artemis seems to emphasize inference over training AI models. 

H100 Tensor Core GPU. Source: Nvidia.

However, it is worth noting that Meta is entering the AI chip arena at a point when competition has gained momentum. It started with a significant move last July, when Meta disrupted the competition for advanced AI by unveiling Llama 2, a model akin to the one driving ChatGPT.

Then, last month, Zuckerberg introduced his vision for artificial general intelligence (AGI) in an Instagram Reels video. In the previous earnings call, Zuckerberg also emphasized Meta’s substantial investment in AI, declaring it as the primary focus for 2024. 

2024: the year of custom AI chips by Meta?

In its quest to empower generative AI products across platforms like Facebook, Instagram, WhatsApp, and hardware devices like Ray-Ban smart glasses, the world’s largest social media company is racing to enhance its computing capacity. Therefore, Meta is investing billions to build specialized chip arsenals and adapt data centers. 

Last Thursday, Reuters got hold of an internal company document that states that the parent company of Facebook intends to roll out an updated version of its custom chip into its data centers this year. The latest iteration of the custom chip, codenamed ‘Artemis,’ is designed to bolster the company’s AI initiatives and might lessen its dependence on Nvidia chips, which presently hold a dominant position in the market. 

Mark Zuckerberg, CEO of Meta, testifies before the Senate Judiciary Committee on January 31, 2024 in Washington, DC. (Photo by Anna Moneymaker/GETTY IMAGES NORTH AMERICA/Getty Images via AFP).

If successfully deployed at Meta’s massive scale, an in-house semiconductor could trim annual energy costs by hundreds of millions of dollars, and slash billions in chip procurement expenses, suggests Dylan Patel, founder of silicon research group SemiAnalysis. The deployment of Meta’s chip would also mark a positive shift for its in-house AI silicon project. 

In 2022, executives abandoned the initial chip version, choosing instead to invest billions in Nvidia’s GPUs, dominant in AI training. The upside of that strategy is that Meta is poised to accumulate many coveted semiconductors. Mark Zuckerberg revealed to The Verge that by the close of 2024, the tech giant will possess over 340,000 Nvidia H100 GPUs – the primary chips used by entities for training and deploying AI models like ChatGPT. 

Additionally, Zuckerberg anticipates Meta’s collection will reach 600,000 GPUs by the year’s end, encompassing Nvidia’s A100s and other AI chips. Like its predecessor, the new AI chip by Meta is built for inference, using algorithms to make ranking judgments and respond to user prompts. Last year, Reuters reported that Meta is also working on a more ambitious chip that, like GPUs, could perform both training and inference.

Zuckerberg also detailed Meta’s strategy to vie with Alphabet and Microsoft in the high-stakes AI race. Meta aims to capitalize on its extensive walled garden of data, highlighting the abundance of publicly shared images and videos on its platform and distinguishing it from competitors relying on web-crawled data. Beyond the existing generative AI, Zuckerberg envisions achieving “general intelligence,” aspiring to develop top-tier AI products, including a world-class assistant for enhanced productivity.

How could blockchain solve the AI copyright problem? https://techhq.com/2024/01/how-could-blockchain-solve-the-ai-copyright-problem/ Mon, 29 Jan 2024 09:30:36 +0000 https://techhq.com/?p=231699

• AI companies are being sued for copyright infringement.
• The AI companies claim they couldn’t train their models without copyright material.
• Does blockchain offer a way forward?

As generative AI continues to grab headlines around the world, its future is not all rosy and certain. In particular, AI has a copyright problem. Large language models, such as OpenAI’s ChatGPT, have faced legal battles due to possible copyright infringements, but crypto-technologies may be the answer to AI’s issues, according to Grayscale CEO Michael Sonnenshein.

It may seem like generative AI is a juggernaut quickly on its way into uncharted realms of technology, but these lawsuits could put the brakes on its rapid development. One main issue has been AI allegedly using copyrighted content as free training material for AI chatbots.

AI and the copyright problem

In December 2023, a lawsuit was filed against OpenAI and Microsoft (a major investor in OpenAI) by The New York Times. It alleged that OpenAI used over a million New York Times articles to train chatbots. This was the first instance of a major American media organization suing an AI company over copyright infringement, though computer programmers and novelists have previously filed copyright suits against various AI companies.

Other AI models have faced legal issues, such as Stability AI, an AI image generation platform. Last year, Getty Images sued the startup company, claiming it used copyrighted images from its library to help train the Stable Diffusion model. This is set to go to trial in the UK this year.

OpenAI has responded to copyright violation claims by saying that copyrighted material is a key requirement for developing large language models. Without this material, it argues, it would be impossible to train AI chatbots, something OpenAI told the UK Parliament’s House of Lords Communications and Digital Select Committee in December 2023.

According to OpenAI, copyright “covers virtually every sort of human expression – including blog posts, photographs, forum posts, scraps of software code, and government documents.” Therefore, the company argues that “it would be impossible to train today’s leading AI models without using copyrighted materials”. It’s clearly a matter of generative AI vs the media right now.

The blockchain factor

There may be a light at the end of the tunnel for generative AI, however. Grayscale (an American digital currency asset management company) CEO Michael Sonnenshein believes blockchain technology could be the solution to AI’s copyright woes, helping create a fairer system, one that allows copyright owners to track when their material is used by large language models and other generative AI systems. That way, the owner can be compensated fairly whenever their material is used in any shape or form by AI.

Imagine trying to output a solution “in the style of Proust”…without knowing who Proust was. That’s the generative AI copyright dilemma.

Currently, understanding who the true copyright owner is, authenticating information, and the rise of deepfakes are just some of the challenges facing AI. The solution could be to ward off threats posed by one powerful technology, in this case generative AI, with another, blockchain.

Functioning as a digital ledger, blockchain enables the transparent sharing of information, ensuring virtual immunity against data manipulation or hacking. Blockchain may be best known as the engine that runs cryptocurrency, but it is already being used to improve the transparency and sharing of medical records in the healthcare industry. Blockchain has also become a key tool for tracing the food supply chain in agriculture. So teaming up blockchain with AI could potentially overcome various hurdles, enabling further AI development.

Whether it’s AI models like ChatGPT, Stability AI, or Midjourney, the main issue is who owns the material generated by AI (an image created on Midjourney, say). There is a growing belief that issues regarding ownership and authenticity could be resolved if the outputs were tied back to the blockchain or programmed into tokens. By tokenizing AI-generated artwork or text, security and trust can be heightened, improving traceability, authentication, and overall efficiency in various applications. This could theoretically resolve copyright concerns promptly and easily.
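To make the idea concrete, here is a minimal, hypothetical sketch of a hash-chained provenance ledger. It is deliberately simplified, with none of the consensus, tokenization, or compensation machinery a real blockchain system would need, but it shows how a work’s fingerprint and owner can be recorded in a tamper-evident way:

```python
import hashlib
import json
import time

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class ProvenanceLedger:
    """Append-only chain of records; each block commits to the previous one."""

    def __init__(self):
        self.blocks = []

    def register(self, work: bytes, owner: str) -> dict:
        prev_hash = self.blocks[-1]["block_hash"] if self.blocks else "0" * 64
        record = {
            "work_hash": sha256(work),  # fingerprint of the artwork or text itself
            "owner": owner,
            "timestamp": time.time(),
            "prev_hash": prev_hash,
        }
        record["block_hash"] = sha256(json.dumps(record, sort_keys=True).encode())
        self.blocks.append(record)
        return record

    def verify(self) -> bool:
        # Recompute every hash; any edit to an earlier record is detected.
        prev = "0" * 64
        for block in self.blocks:
            body = {k: v for k, v in block.items() if k != "block_hash"}
            if block["prev_hash"] != prev:
                return False
            if sha256(json.dumps(body, sort_keys=True).encode()) != block["block_hash"]:
                return False
            prev = block["block_hash"]
        return True

ledger = ProvenanceLedger()
ledger.register(b"raw bytes of a generated image", owner="alice")
ledger.register(b"text of a generated short story", owner="bob")
print(ledger.verify())  # True, and False if any earlier record is altered
```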

Whether blockchain and generative AI’s relationship is a happily ever after story or suffers a tragic Shakespearean fate is as yet unknown. But there are signs that they could grow quickly and powerfully together, benefiting generative AI models, copyright owners, and creators alike.

OpenAI’s in-house chip odyssey: Sam Altman aims for a network of fabs https://techhq.com/2024/01/openais-in-house-chip-odyssey-sam-altman-aims-for-a-network-of-fabs/ Wed, 24 Jan 2024 15:00:45 +0000 https://techhq.com/?p=231393

  • Sam Altman, CEO of OpenAI, has been wooing investors like G42 and SoftBank for chip fab capital.
  • His urgency stems from the expected chip supply shortage by the decade’s end.
  • Insiders have revealed that the planned network aims to partner with top-tier chip manufacturers and will have a worldwide reach.

In a thought-provoking revelation during The Wall Street Journal’s Tech Live event in October 2023, Sam Altman said he would “never rule out” that OpenAI could end up crafting its own AI chips. While acknowledging that OpenAI is not currently developing proprietary chips, Altman hinted that realizing the grand vision of attaining general AI might necessitate the company venturing into chip creation. Many saw it as a dynamic stance, underlining OpenAI’s adaptability and commitment to pushing boundaries in the ever-evolving landscape of AI. Additionally, if OpenAI were to, for instance, own the patent on the chips that made general AI possible, it would essentially own the future of the world.

However, Altman has long emphasized the importance of developing specialized hardware to meet the unique demands of AI. Project Tigris emerges from this vision: to craft a dedicated chip tailored to optimize the processing requirements of OpenAI’s advanced AI models. 

What is the significance of in-house chips for OpenAI?

Developing an in-house chip promises to significantly improve the performance and efficiency of OpenAI’s AI models. By customizing hardware to align with the specific needs of advanced machine learning algorithms, OpenAI aims to push the boundaries of what AI can achieve, potentially unlocking new possibilities in fields ranging from natural language processing to computer vision.

At the heart of Altman’s intention is to keep OpenAI from being thrown off course by the seemingly simple obstacle of microchip shortages. The scarcity of these vital components, crucial for the advancement of AI, has already become a colossal headache for Altman and numerous tech executives striving to replicate OpenAI’s triumphs.

Altman has repeatedly emphasized that the existing chip supply cannot keep pace with OpenAI’s insatiable requirements.

But Altman’s endeavors faced a temporary hiatus when he was briefly removed as OpenAI CEO in November 2023. However, soon after his return, the project was reignited. Altman has even explored the possibility with Microsoft, and sources reveal the software giant’s keen interest in the venture.

What has Sam Altman planned for OpenAI now?

The latest development is that Altman has discreetly initiated conversations with potential investors, aiming to secure substantial funds not just for AI chips but for creating whole chip-fabrication plants, affectionately known as fabs. Veiled in anonymity, sources disclosed that among the companies engaged in these discussions was G42 from Abu Dhabi – a revelation by Bloomberg last month – and the influential SoftBank Group.

“The startup has discussed raising between US$8 billion and US$10 billion from G42,” said one of Bloomberg‘s anonymous sources on the story. “It’s unclear whether the chip venture and wider company funding efforts are related,” the report reads. Unbeknown to many, this fab project entails collaboration with top chip manufacturers to use the expertise of established industry players, ensuring that Project Tigris benefits from the latest advancements in semiconductor technology.

While Bloomberg previously hinted at fundraising efforts for the chip venture, the exact scale and manufacturing focus have yet to be unveiled. Still in their early stages, these talks have not yet finalized the list of participating partners and backers, adding a layer of intrigue to this evolving narrative.

Is OpenAI’s venture into building its chip fabs a viable endeavor?

Altman courting Korean expertise and money? Source: X.com.

Ultimately, Altman advocates for urgent industry action to ensure an ample chip supply by the end of the decade. However, his approach, emphasizing the construction and maintenance of fabs, diverges from the cost-effective strategy favored by many AI industry peers, including Amazon, Google, and Microsoft—OpenAI’s primary investor. 

These tech giants typically design custom silicon and outsource manufacturing to external suppliers. The construction of a cutting-edge fab involves a significant financial investment, often reaching tens of billions of dollars, and establishing a network of such facilities spans several years. A single chip factory’s cost can range from US$10 billion to US$20 billion, influenced by factors such as location and planned capacity. 

For instance, Intel’s Arizona fabs are estimated at US$15 billion each, and TSMC’s nearby factory project is projected to reach around US$40 billion. Moreover, these facilities may require four to five years for completion, with potential delays due to current workforce shortages. Some argue that OpenAI seems more inclined to support leading-edge chip manufacturers like TSMC, Samsung Electronics, and potentially Intel rather than enter the foundry industry. 

In an article in The Register, it’s suggested that the strategy could involve channeling raised funds into these fabrication giants, such as TSMC, where Nvidia, AMD, and Intel’s GPUs and AI accelerators are manufactured. TSMC stands out as a prime candidate, given its role in producing components for significant players in the AI industry. 

“If he gets it done—by raising money from the Middle East or SoftBank or whoever—that will represent a tech project that may be more ambitious (or foolhardy) than OpenAI itself,” Cory Weinberg said in his briefing for The Information.

While the ambition behind Project Tigris is commendable, inherent challenges and risks are associated with developing custom hardware. The intricacies of semiconductor design, production scalability, and compatibility with existing infrastructure pose formidable hurdles that OpenAI will need to overcome to realize the full potential of its in-house chip.

Blockchain won’t stop AIs stealing copyrighted work https://techhq.com/2024/01/blockchain-wont-stop-generative-ai-copyright-questions/ Tue, 23 Jan 2024 12:00:19 +0000 https://techhq.com/?p=231348

  • Generative AI copyright issues won’t be solved by blockchain.
  • Multiple lawsuits attempt to claw back owners’ rights.
  • AI firms claim the use of ‘scraped’ materials is ‘fair use.’

The economic viability of machine learning as a service (MLaaS) is being stymied by a host of lawsuits, most notably against Anthropic and OpenAI. In all cases, owners of copyrighted material object to and seek compensation for the use of materials used without their permission to train machine learning models.

Generative AI models used by companies like OpenAI scrape vast amounts of data from the public internet, but the companies claim their methods constitute ‘fair use’ of publicly-available materials. There are several legal arguments in play, including “volitional conduct,” which refers to the idea that a company that commits copyright infringement has to be shown to have control over the output of the disputed materials. In short, if you can get OpenAI to disgorge a line of poetry, verbatim, that’s been published under a notice of copyright (say, at the bottom of the web page it’s published on), OpenAI is in breach of copyright.

In the case of the New York Times‘s action against OpenAI, the newspaper claims the ML engine crawled and absorbed millions of NYT articles to inform the popular AI engine, gaining a “free ride on the Times’s massive investment in its journalism,” according to the text of the lawsuit.

Generative AI copyright lawsuits

Similar cases have been brought against Midjourney and Stability AI (owners of Stable Diffusion) by Getty Images, which also cites copyright infringement of images it owns the rights to. Class action suits have also been brought against DeviantArt, whose machine models produce images from users’ text prompts.

Giving evidence to the UK’s House of Lords Communications and Digital Select Committee, OpenAI claimed, “[…] it would be impossible to train today’s leading AI models without using copyrighted materials.” In this, the company admits it’s trained models on materials legally owned by others, but it’s ‘fair use.’

Toot illustrating the generative AI copyright debate. Source: fosstodon.org

The case of Getty Images is particularly noteworthy. The company had steered clear of any AI image creation offering, citing “real concerns with respect to the copyright of outputs from these models and unaddressed rights issues with respect to the imagery, the image metadata, and those individuals contained within the imagery,” as Getty’s CEO, Craig Peters, put it in an article in The Verge in 2022.

However, the company announced ‘Generative AI by iStock’ at CES 2024, which draws on its library of images, claiming legal protection and usage rights for content creators. “You can rest assured that the images you generate, and license, are backed by our uncapped indemnification,” the company’s website now states, and that it’s “created a model that compensates […] content creators for the use of their work in our AI model, allowing them to continue to create more […] pre‑shot imagery you depend on.”

Enforcing copyright has always been problematic online, especially if the owner of published media lacks the backing of a phalanx of sharp-toothed lawyers, shoals of whom tend to congregate around large businesses and organizations rather than independent content creators. The choice for artists, musicians, and even part-time bloggers has always, since the internet began, been whether or not to publish. Put the message or media ‘out there,’ and it’s open to potential exploitation by others. Don’t publish digitally, and risk obscurity. Halfway houses like robots.txt files that ask ML crawlers to stay away (an example follows below), in the same way that it’s hoped search engines will not index website pages (“no follow, pleeease”), are a gamble that trusts the inherent good nature of the huge corporations controlling the ML models, like Microsoft in the case of OpenAI.

Because nobody ever got burned trusting huge corporations. Right?
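For what that gamble looks like in practice, here is the kind of robots.txt stanza site owners currently add. The user-agent strings below are, to the best of our knowledge, the publicly documented crawler names used by OpenAI, Common Crawl, and Google for AI training; honoring them is entirely voluntary on the crawler’s side.

```
# robots.txt - a polite request, not an enforcement mechanism
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /

# Ordinary search indexing can still be allowed separately
User-agent: *
Allow: /
```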

Blockchain to solve copyright issues?

Speaking to Business Insider last week, the CEO of cryptocurrency asset manager Grayscale, Michael Sonnenshein, suggested blockchain would be an immutable way of proving copyrighted material's provenance. “[…] To us, it's just so obvious that you need an irrefutable, immutable technology to marry [authenticity and ownership], to actually head-on address some of the issues, and that technology is blockchain, which underpins crypto[currency]. […] All of a sudden, issues like provenance and authenticity and ownership, etc., get resolved really, really quickly.”

There are three reasons why Sonnenshein's assertions are fallacious. Firstly, we have already tried asserting authenticity via blockchain: the result was a Ponzi scheme called NFTs, which are valued only by the idiots who trade in them. Secondly, blockchain publication has the potential to be ecologically disastrous. The coin mining industry for Bitcoin, Ether, and Monero produces the carbon equivalent of a medium-sized country each year. For example, each $1 of value in Bitcoin produces an estimated $0.50 in environmental and health damage (primarily through air pollution from the fossil fuel generation powering mining rigs). Thirdly, if we ensure creators are “properly compensated and credited for what they produce,” because blockchain tells us, without doubt, who they are, we have come full circle. To re-quote OpenAI's statement in the UK's House of Lords committee rooms:

“[…] it would be impossible to train today’s leading AI models without using copyrighted materials.” What the word “impossible” means, in context, is “too costly.” Could you imagine a world where big AI-as-a-service providers track down and pay every content creator on the internet for the use of their work to train AI models? No, neither can we.

As a content creator, the only sure-fire way to protect copyright is to digitally encrypt every item, or place it behind some kind of insurmountable barrier where it can’t be scraped. That’s a paywall, or equivalent walled garden in front of every creator’s work. At a stroke, the internet – designed to be a place for the free and open interchange of ideas, knowledge, and perhaps art – becomes the victim of voracious machine learning algorithms controlled by global businesses.

The world in which generative AI and copyright peacefully co-exist may be attainable. What it won’t be is either cost-effective or easy to achieve.

Trawling the seas to illustrate generative AI/ML copyright discussion article.

“Golden Sky Trawler” by 4BlueEyes Pete Williamson is licensed under CC BY-NC-ND 2.0.

The post Blockchain won’t stop AIs stealing copyrighted work appeared first on TechHQ.

]]>
ChatGPT inaccurately diagnoses pediatric medical cases in over 80% of cases https://techhq.com/2024/01/chatgpt-misdiagnoses-medical-cases-in-study/ Mon, 22 Jan 2024 12:00:59 +0000 https://techhq.com/?p=231306

• ChatGPT misdiagnoses medical conditions over 80% of the time in pediatric study. • Researchers say, though, that a more finely trained GPT could probably do significantly better. • ChatGPT currently misses links between symptoms and medical conditions. A recent study published in JAMA Pediatrics has found that OpenAI's ChatGPT may need to go back to medical... Read more »

The post ChatGPT inaccurately diagnoses pediatric medical cases in over 80% of cases appeared first on TechHQ.

]]>

• ChatGPT misdiagnoses medical conditions over 80% of the time in pediatric study.
• Researchers say, though, that a more finely trained GPT could probably do significantly better.
• ChatGPT currently misses links between symptoms and medical conditions.

A recent study published in JAMA Pediatrics has found that OpenAI's ChatGPT may need to go back to medical school, as it failed to correctly diagnose 83% of hypothetical child medical cases. The study, conducted at Cohen Children's Medical Center in New York, analyzed the language model's answers to a series of pediatric diagnostic challenges, only to discover an alarming error rate.

Researchers studied 100 medical cases known as pediatric case challenges. These were originally posed to physicians as diagnostic challenges built on limited or unconventional information. The challenges sampled were published in JAMA Pediatrics and the New England Journal of Medicine (NEJM) over the space of ten years (2013 to 2023).

Researchers pasted text from the medical cases into a prompt. From here, two physician researchers monitored ChatGPT’s responses, marking them down as “correct,” “incorrect,” or “did not fully capture the diagnosis.”

Out of the 100 cases studied, ChatGPT gave a completely incorrect diagnosis 72 times. A further 11 responses were considered “clinically related” to the correct diagnosis, but were too broad to count as accurate. Taken together, then, 83% of diagnoses were incorrect to some significant degree.

Of the 83 incorrect diagnoses, though, 57% were at least in the same organ system, which shows promise, albeit promise that's nowhere near reliable enough to be used on live cases. For instance, ChatGPT could identify a general symptom, but one shared by various medical conditions, without specifying the precise ailment.
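For anyone who wants the arithmetic behind those headline figures spelled out, here's a minimal sketch (the numbers are taken from the study as reported above; the variable names are ours):

    total_cases = 100
    fully_incorrect = 72
    too_broad = 11                               # "clinically related" but not specific enough

    incorrect = fully_incorrect + too_broad      # 83 of the 100 cases
    error_rate = incorrect / total_cases         # 0.83 -> the "83%" headline figure
    same_organ_system = round(0.57 * incorrect)  # roughly 47 of the 83 misses

    print(f"{error_rate:.0%} of diagnoses judged incorrect")
    print(f"~{same_organ_system} of the {incorrect} misses were at least in the right organ system")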

Why did ChatGPT fail so badly on medical diagnosis?

Although ChatGPT continues to advance, it still has its flaws. Researchers believe the generative AI model is unable to ascertain the connections between specific conditions and the preexisting or external factors typically used in clinical diagnosis. That, they say, is why ChatGPT fails to accurately diagnose certain medical conditions.

One example from the study was ChatGPT's failure to link “neuropsychiatric conditions” such as autism to frequently observed consequences of restrictive diets, such as vitamin C deficiency (scurvy). ChatGPT's diagnosis was instead a rare autoimmune condition.

In another instance, ChatGPT diagnosed a branchial cleft cyst, a lump below the collarbone or in the neck, when the correct diagnosis was in fact branchio-oto-renal syndrome, a genetic condition that results in the development of abnormal tissue in the neck.

Close, but no artificial cigar.

It’s plain to see that ChatGPT failed miserably in this test, but researchers still have hope for the AI model. They believe improvements could be seen if ChatGPT were trained selectively and specifically on trustworthy and accurate medical literature – a kind of MedicGPT. Currently, it is trained on information garnered online, which can often pump misinformation into its musings. After all, would you let “the internet as a whole” diagnose your illness? Researchers also believe that increased real-time access to accurate medical data could help improve AI chatbots.

The authors of the study concluded that “this presents an opportunity for researchers to investigate if specific medical data training and tuning can improve the diagnostic accuracy of LLM-based chatbots.” In the meantime, going down the old-fashioned route of talking to a medical professional seems the best option.

Previous efficacy studies showed promising results

This isn’t the first time AI-based chatbots have been studied for their efficacy in diagnosing certain medical conditions. Generative AI chatbots generally rely on Large Language Models (LLMs), trained using substantial amounts of text data, to understand and generate human-like language. This technology is rapidly advancing, and this was evident in a 2023 study which concluded that generative AI could pass the three-part United States Medical Licensing Exam. Such findings raise hopes that generative AI chatbots could be utilized as a digital assistant to physicians, as well as aiding clinical decision support systems.

Significant criticism of AI's training limitations and its potential to amplify medical biases remains, but the American Medical Association, along with many other medical organizations, does not perceive AI's progress as a threat in terms of replacing medical staff. There is instead optimism surrounding well-trained AI, with many believing it has significant potential for communicative and administrative tasks within the medical industry. For instance, it could be used to explain diagnoses in simpler terms, helping patients understand their cases with greater ease.

Nevertheless, the application of AI in clinical uses, particularly in diagnostics, continues to be a contentious and challenging area of research.

This latest study may show generative AI’s shortcomings, but it is the first report of its kind, one that solely analyzes pediatric medical cases. Further research is required in various medical fields before AI is considered a trustworthy and accurate tool doctors can rely on. Currently, AI has limitations, and even the most advanced publicly accessible AI models fall short of matching the breadth of human expertise.

AI has the potential to cut administrative burdens

AI may have its issues, but as it continues to advance, medical professionals have already been testing its efficacy for tasks like news releases. For instance, Dr John Halamka, MD, MS, president of the Mayo Clinic Platform, used ChatGPT to create a news release, which was “perfect, eloquent and compelling.” Here's the bad news – Dr Halamka said the information was “totally wrong,” so he had to correct every material fact himself.

ChatGPT can't yet do better than a human doctor.

Keep hitting the books, kids.

Nevertheless, Dr Halamka was able to finish the task within just five minutes, a considerably shorter time than usual, which he noted is typically around an hour.

AI may not be at a level to diagnose and provide treatment plans for medical cases, but it has the potential to be used for administrative purposes. Those purposes include generating text that humans can edit to ensure the facts are correct, and producing CPT codes from operative reports (early reports have found generative AI can complete this with some accuracy). The result? A reduction in clerical burdens. Considering there has been a great resignation in medicine over the last few years, a technological reduction of the burden could prove to be a turning point for the industry.

AI is already being used by hundreds of companies in the fields of healthcare, pharmaceuticals, and technology. These companies use AI systems to conduct research into various avenues. For instance, AiCure in New York City utilizes “video, audio, and behavioral data to better understand the connection between patients, disease and treatment.” In Amsterdam, Netherlands, clinician-oriented Aidence uses AI systems for radiologists to help improve “diagnostics for the treatment of lung cancer.” Bot MD in Singapore builds AI chatbots to “answer clinical questions, transcribe dictated case notes and automatically organize images and files.”

In recent years, AI has been applied to predict emerging Covid-19 hotspots and to analyze flight traveler data to help combat coronavirus. Companies such as Apple, Google, and BlueDot have combined AI, data analysis, and machine learning to build platforms that help with disease control, identifying outbreaks and notifying those exposed to a virus.

The potential of AI in global healthcare is limitless, but we are still a long way off replacing qualified medical professionals with AI bots. For now, AI is a promising tool, and one that will likely benefit modern medicine in years to come, rather than pose a hindrance.

The post ChatGPT inaccurately diagnoses pediatric medical cases in over 80% of cases appeared first on TechHQ.

]]>
The standout technology of 2023 – our writers speak https://techhq.com/2023/12/technology-2023-in-roundup/ Sun, 24 Dec 2023 13:20:47 +0000 https://techhq.com/?p=230911

• Technology in 2023 has been revolutionary. • Among the technology that has changed the world in 2023, AI (LLMs) has been a significant standout. • The drive towards applying generative AI in every industry has meant a great focus on data center questions. As the 2023 calendar draws to its end, it's time for... Read more »

The post The standout technology of 2023 – our writers speak appeared first on TechHQ.

]]>

• Technology in 2023 has been revolutionary.
• Among the technology that has changed the world in 2023, AI (LLMs) has been a significant standout.
• The drive towards applying generative AI in every industry has meant a great focus on data center questions.

As the 2023 calendar draws to its end, it’s time for our annual write-up of the technology events and trends on which we’ve focused this year.

Each of the writers in the Hybrid News stable has their particular specialisms and interests, so our round-up of the big trends in tech in 2023 is best served by giving each a voice here on the pages of TechHQ.

Content moderation in 2023

Tony Fyler writes:

You never know what you’ve got, or what you’ve had, until either it’s gone or it’s under threat.

That’s content moderation in 2023.

The rules of decent society, carried over onto social media networks, have always presumed there would be agreed rules of engagement, and enough humans to adequately police that engagement.

But with the coming of Elon Musk to then-Twitter, just as with the coming of Donald Trump to the White House, those rules began to fray. Musk fired the vast majority of his content moderation team on arriving at Twitter, both in an attempt to cut costs at the legendarily unprofitable platform and as part of a campaign to extend “free” speech into areas that seek to delegitimize the ideas of diversity, equity, and inclusion.

Meta followed suit, cutting staff from its content moderation and fact-checking teams across 2023, raising serious fears for the impartiality of social media reporting of key events like the 2024 US Presidential election.

As the fundamental role of social media shifts from pure entertainment to include more journalistic functions, the role of content moderation will become ever more important – without it, active disinformation, or the false equivalence of facts and lies, replaces an informed democracy.

Technology in 2023 continues to throw up challenges.

X continues to challenge content moderation assumptions.

Quantum computing technology goes mainstream in 2023

Aaron Raj writes:

Quantum computing is still a relatively expensive technology for most organizations. But while the industry is still maturing, investment has been pouring into quantum computing research, with more organizations now experimenting with potential use cases.

IBM, in particular, has been at the forefront of quantum computing research and development in 2023. The IBM Quantum Network has seen tremendous progress among its members. The Cleveland Clinic and IBM unveiled the first deployment of an onsite private sector quantum computer in the US, which will be dedicated to healthcare research.

IBM also unveiled the Quantum Heron, the first in a new series of utility-scale quantum processors. The 133-qubit processor offers a fivefold improvement over the previous best records set by the IBM Eagle.

Apart from IBM, several other quantum computing companies also recorded milestones in 2023. Among them is IonQ, which offers a fully managed quantum computing service through AWS. There is also Horizon Quantum Computing, a Singapore-based company building software development tools to unlock the potential of quantum computing hardware. The company raised a significant investment earlier this year and has established an engineering center in Europe.

Perhaps the biggest takeaway from quantum computing in 2023 will be the push towards post-quantum cryptography to unify and drive efforts to address the threats posed by quantum computing. The National Institute of Standards and Technology (NIST) will publish in 2024 the guidelines needed to ensure a smooth migration to the new post-quantum cryptographic standards.

Large language models loom large in 2023

James Tyrrell writes:

Many people would pick AI as the technology of 2023, but those in the know would dig a bit deeper and recognize large language models (LLMs) as the real heroes of the story. Throughout the year, the impact of LLMs has been remarkable.

Enterprise software providers have integrated natural language search into their products so that users can query business data as if they were talking to a knowledgeable colleague. And we have LLMs to thank for that breakthrough. Whether LLMs can push the cost of intelligence (close) to zero, as OpenAI’s Sam Altman has forecast, remains to be seen. But billion parameter models capable of next-word prediction are certainly clever (and know how to stack a book, nine eggs, a laptop, a bottle, and a nail on top of each other, should you ever be faced with a complex and life-questioning stacking dilemma).

One of the most beautiful things about LLMs is that they can be trained on unlabelled data. You just have to mask a word in a sentence and have the algorithm find the most likely candidate – tuning the model weights as you go.
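To make that concrete, here's a minimal sketch of masked-token prediction using the Hugging Face transformers library (an assumption on our part: the library and the bert-base-uncased checkpoint are available; GPT-style models are trained on next-word prediction rather than masking, but the self-supervised principle – hide a token, predict it, nudge the weights – is the same):

    from transformers import pipeline

    # Fill-mask pipeline: the model proposes the most likely tokens for [MASK].
    fill_mask = pipeline("fill-mask", model="bert-base-uncased")

    for candidate in fill_mask("Large language models are trained on [MASK] data."):
        # Each candidate carries the predicted token and the model's confidence score.
        print(f"{candidate['token_str']:>12}  score={candidate['score']:.3f}")

During training it works in reverse: the true token is known, so the gap between prediction and reality is what's used to update the model's weights.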

Such unsupervised learning has allowed LLMs to vacuum up virtually all of the text on the internet in every published language. We now have multilingual business avatars which are only too happy to meet and greet customers 24/7 and virtual agents that can handle common contact center voice calls with ease. What’s more, compression techniques such as dynamic sparsity allow models to run at the edge and put LLMs in your pocket.

Smartphone chips and augmented reality processors are being designed with neural engines to help us query the world as we go about our daily lives. That’s great news for remote maintenance, and LLMs have an abundance of productivity plus points – many more of which are sure to play out in 2024. The statistical magic that LLMs bring to the table shines bright – at least in most directions. Having LLMs fill in the gaps in human thought could turn out to be a double-edged sword, though. And the jury is out on whether AI is good or bad news for jobs – technology writers included!

The technology of 2023 could eventually put commentators out of business.

Will write wurdz for cash… the future of content creation?

2023: the year in data centers

Fiona Jackson writes:

Love it or hate it, AI (or LLMs – thanks, James) was the hot topic of the last year – as confirmed by the Collins Dictionary. The visibility that ChatGPT brought to the technology resulted in consumers demanding that products and services match its level of intelligence. Naturally, this demand has been passed on to product and service providers – and then to the data centers that support them. It's no longer just research departments and specialized industries that need AI workloads hosted, and data center operators have been scrambling to keep up.

Across the world, racks are being densified, new direct-to-chip cooling solutions are being built, and energy-efficient strategies are being implemented to handle the increased computational requirements. TechHQ visited Iceland in October to check out whether the country's claims of sustainable data solutions were true, and even with naturally cold temperatures making direct air cooling a viable option, it turns out many operators are investigating more efficient liquid cooling alternatives to future-proof themselves against further demand. In five years' time, 2023 will be looked back on as a turning point in data solution visibility.

From the public's perspective, data centers will go from faraway, almost mythological facilities that enable ‘the cloud’ to familiar infrastructure, built into skyscrapers, supermarkets, or architecturally impressive buildings that draw the eye. But their new presence in society will not just be physical: as discussions about AI, data handling, and technological infrastructure permeate everyday conversations, concerns regarding sustainability, ethics, and the societal impact of these advancements will enter public discourse, fostering a deeper understanding of the pivotal role these facilities play.

The year of ubiquitous AI

Muhammad Zulhusni writes:

In 2023, AI firmly established itself as a staple in our daily lives, initiating an era where it’s no longer a futuristic concept but a tangible, integral reality. This year marked a shift from AI being a source of curiosity and entertainment to becoming a critical tool across various domains.

The emergence of “prompt whisperers” exemplified the evolving interaction with AI, guiding users in creating effective prompts and blending AI services for enhanced outputs. AI's influence was profoundly felt in the workplace, and the technology made headlines for winning photography competitions and excelling in academic exams. ChatGPT's user base reached 100 million by February, a testament to its widespread acceptance.

Other significant developments included the launch of Google’s chatbot Bard, Microsoft incorporating AI into Bing, and Snapchat’s introduction of MyAI. GPT-4’s release in March further advanced AI capabilities, particularly in document analysis.

Major corporations like Coca-Cola and Levi’s leveraged AI for advertising and creating virtual models. The year also saw culturally impactful moments, such as the viral image of the Pope in a Balenciaga jacket and calls for a pause in AI development. Amazon integrated AI into its offerings, while Japan made notable rulings on AI training and copyright. In the US, screenwriters went on strike over AI-generated scripts and actors, highlighting the growing influence and controversy surrounding AI.

AI’s rapid advancement in 2023 has significant implications for the future, particularly in reshaping job markets, education, and policy-making. It’s driving crucial conversations around ethics, privacy, and data security, prompting new regulations and standards. The democratization of AI tools is sparking innovation across industries, fostering an environment of rapid technological progress.

Five years from now, 2023 will be seen as the beginning of the AI revolution, setting the stage for AI to be an integral, ethically integrated part of our lives, revolutionizing our interactions with technology and society.

2023, a year of fading Red Hat

In 2023, technology giant Red Hat lost its mind. Or its soul.

Joe Green writes:

2023 saw Red Hat’s crown slip out from under the brim of the company’s fedora. For years the poster child of how an open source company could make real money, Red Hat suddenly decided to annoy and negatively impact the community of developers, admins, and IT professionals who – let’s be honest – make sure a sizeable chunk of the world’s computers keep doing their thing.

Early in the year, the byte-for-byte copy of Red Hat Enterprise Linux, CentOS, was canned with little notice, and more recently, the company decided the (previously open source) source code for RHEL was to be placed behind what amounted to a paywall.

Many commentators placed the blame on the perceived ‘bad guy,’ namely IBM, which has owned the Linux outfit since 2019. But regardless of where the decisions came from, the imperative behind the moves was commercial – a short-term maximizing of profits at the expense of long-term continuity, goodwill, and the collectivist ethos on which Red Hat and the internet were built.

There are significant parallels between the myopic mindset behind Red Hat's course of action and humanity's collective response to the accelerating climate disaster we are living through. Despite the cost of failure being higher, by a huge factor, in 30 years' time, both we and Red Hat/IBM choose short-termism, indolence, and profiteering over positive, collective action to assure a future.

The year 2023 in technology roundup illustration

“2023 Happy New Year Taiwan Kaohsiung 高流幸福式元旦煙火” by 黃昱峰 is licensed under CC BY-NC-SA 2.0

Did 2023 kick off the era of quantum utility?

The post The standout technology of 2023 – our writers speak appeared first on TechHQ.

]]>