Big Data - TechHQ: Technology and business

Inkitt: what happens when AI eats its own words?
https://techhq.com/2024/03/ai-will-help-writers-create-literally-average-stories/
Mon, 04 Mar 2024 09:30:39 +0000

  • Inkitt AI help for writers shows successful patterns.
  • Success delivered by what are proven to be winning formulae.
  • We look forward to Fast & Furious 52‘s release in 2066.

The latest $37m funding round for the self-publishing platform Inkitt was awarded at least in part due to its intention to use large language models that work on behalf of its authors. The AI will guide submissions to the eponymous app in areas such as readability, plot, and characterization.

Self-publishing is hugely popular among authors. It circumvents the often-frustrating processes of finding an agent, receiving rejections from established publishing houses, and losing income to the parties in the chain who each take a cut of sales revenue. An AI-powered virtual assistant can help authors with advice and offer changes to a text that are drawn from previously successful stories.

Inkitt’s AI amalgamates the output from several large language models to find trends in the enormous body of previously published books, giving writers help to align their work with already successful and popular works. At first sight, its approach is clearly more appropriate than having ‘authors’ simply use an AI to create words for a book. It’s also a step above once-respected news outlets using AI to write stories. But a deeper understanding of how large language models work informs us that the boundaries of creativity possible with AI are claustrophobic.

“Cuba book” by @Doug88888 is licensed under CC BY-NC-SA 2.0.

Whether in video, visual art, game design, or text, machine learning algorithms are trained on extant publications. During the training phase, they process large quantities of data and learn patterns that can then be used to reproduce material similar to that in the body of training data.

In the case of a novel or screenplay’s structure, then, what’s succeeded in the past (in terms of popularity and, often, revenue generated) can be teased out from the also-rans. It’s a process that is as old as creativity itself, albeit a habit that’s formed without digital algorithmic help. Hollywood industry experts can produce lists of formulae that work for the plot, the rhythm of narrative flow, characterization, and so on. Such lists, whether ephemeral or real, inform the commissioning and acceptance of new works that will have the best chance to succeed.

The threat to creativity from the models used in ways like that proposed by Inkitt is twofold. The most obvious is one of the repetition of successful formulae. This means, depending on your choice of language, works that are on-trend, derivative, zeitgeisty, or repetitious.

The second threat comes from the probability curves embedded in the AI code. Any creative work chewed up by an algorithm will have its deviation from the average diminished. What can’t be judged particularly easily is what makes something an exception – whether it differs from the average because it’s badly created or because it’s superbly created. Truly fantastic creative works may be given a lesser weight because they don’t conform to a number of other factors, like sentence length or a color palette that is (currently) familiar.

The effect is one of standardization and averaging across the gamut of creative output so that a product is successfully conformist to the mean. Standardization equals conforming, which equals success. But standardization leads inexorably to stagnation.

In practical uses of AI today, many of the traits and methods of models are perfect for their designed purpose. Data analytics of spending patterns informs vendors’ choices for new product development based on what sells well. Outliers and exceptions have little importance and are rightly ignored by the model’s probability curve.

But in areas of creating new art, product design, music composition, or text creation, the exceptions can have value, a value that is increased by not conforming to average patterns of success, readability, aesthetic attractiveness, characterization, or one of a thousand other variables at play. If conformity to guidelines means success, then how we define success is the interesting question. History is littered with composers, artists and writers who didn’t conform, and were successful during their lifetimes or posthumously. Plenty too who were successful conformists. And many who kicked against prevailing strictures and got nowhere, dying in poverty.

Will AI be able to give help to writers?

“book” by VV Nincic is licensed under CC BY 2.0.

So what help can AI actually deliver for writers? As in many areas of life and business, it can work well as a tool, but it cannot – or at least should not – be allowed to dictate the creative elements of art.

By reducing creativity to an algorithmically generated idea of “what works,” talent that’s non-conforming is immediately stymied. It depends, of course, on what the creator’s desired outcome is, or how they deem themselves to be successful. If they want a greater chance of achieving mainstream popularity, then the Inkitt AI will help guide them in what to change to better fit into the milieu. Many dream of being the scriptwriter or 3D visual designer for the next movie blockbuster, and there is value in that. Inkitt may make people better writers, but it’s the individual’s idea of what a ‘better’ writer is that will inform their decision whether or not to sign up.

Individual human voices can make great creative works. But by placing those works inside a mass of mediocrity (and worse) and teaching an algorithm to imitate the mean, what’s produced is only ever, at best, going to be slightly better than average. As more content is created by AI and it too becomes part of the learning corpora of machine learning algorithms, AIs will become self-replicating, but not in the manner of dystopian sci-fi. Much of the future’s published content will just be very, very dull.
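The dynamic described above can be sketched in a few lines. The toy simulation below is our illustration, not Inkitt’s system: the numbers are invented stand-ins for any measurable creative trait, and the ‘revision toward the mean’ step is a crude proxy for formula-following guidance. Each generation is trained on the previous generation’s output, and the spread of the corpus shrinks every time.

```python
# Toy model-collapse demo (illustrative only): each generation imitates the
# previous generation's output and nudges outliers back toward the average.
import random
import statistics

random.seed(0)

# Generation 0: human-written works, scored on some creative trait.
corpus = [random.gauss(50, 15) for _ in range(1000)]

for generation in range(1, 6):
    mean = statistics.fmean(corpus)
    spread = statistics.pstdev(corpus)
    # New works are sampled from the learned distribution; anything far from
    # the mean is revised halfway back toward it.
    corpus = [
        x if abs(x - mean) < spread else mean + 0.5 * (x - mean)
        for x in (random.gauss(mean, spread) for _ in range(1000))
    ]
    print(f"generation {generation}: spread = {statistics.pstdev(corpus):.1f}")

# The spread shrinks every generation: the corpus converges on the average.
```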

Oatmeal for breakfast, lunch, and dinner.

Amalgamating for the sellable mean turns tears of human creativity into nothing more than raindrops in the flood.

Data living in motion – Hammerspace
https://techhq.com/2024/02/data-storage-for-ai-and-more-from-hammerspace/
Tue, 27 Feb 2024 09:30:54 +0000


• Data storage solutions underpin forward-looking technologies like generative AI and machine learning.
• That means there’s a need for smarter, more streamlined, and more ecological data storage.
• Hammerspace is making its HPC parallel file system available as a NAS.

Hammerspace is the instantly accessible storage area in fiction, the imaginary extra dimension that hammers, say, appear from when Tom needs to set a trap for Jerry. It’s also the name of the US-based data storage company on a mission to change how people use their data.

What's a hammer if not a lot of data points - wood and metal data points - held in storage just where they're needed?

It’s Hammer time!

Data is always in motion and, much like an anvil pulled from behind the back just in time, should be accessible as-and-when needed. Unlike its eponymous cartoon concept though, Hammerspace the company doesn’t require you to suspend your disbelief.

Late last week, Hammerspace – which has welcomed Brian Pawlowski (whom you might know from his work at Quantum) as VP of performance engineering – announced the availability of its HPC parallel file system as an enterprise NAS for the first time.

A response to the need for a new storage architecture in an increasingly data-driven world, Hammerspace answers the demands of the next data cycle.

Clear gaps are emerging in the way data is being used: storing unstructured data for deep learning creates silos, making it difficult to access and unify data sources, while the high performance demands needed to keep GPUs utilized aren’t met by existing NAS, which isn’t designed for large-scale compute.

The evolution of storage architectures.

From Hammerspace.

We all know that AI and ML are generating waves that wash upon every industry, but when it comes to data, the implications are particularly huge: there isn’t AI without data, and the emerging industry’s demands are forcing a reckoning on data storage methods.

Hammerspace is simplifying data pipelines for these new technologies by aggregating data into a single file system that’s globally accessible. It’s also directly enabling the future of AI, which demands performance rates not met by legacy NAS architectures.

Hyperscale NAS architecture speeds time-to-market and time-to-insight. Read the technical brief from Hammerspace here.

“We have traditionally separated scale-out file systems, commonly known as parallel file systems, from NAS precisely due to the nature of their performance for very large HPC/AI environments. As we enter into this next generation of AI, new technologies, particularly in data infrastructure, are needed,” said Camberley Bates, VP and practice lead at The Futurum Group.

Available globally and a world first, the Hammerspace data environment includes two broad sets of capabilities – hyperscale NAS and data orchestration. Offered as a standard capability of the Hammerspace Global Data Environment and included in the cost of Hammerspace software licensing, the new offering meets most data storage demands.

Data storage sustainability

While not every change in data storage systems directly pertains to sustainability, the data storage industry cannot be separated from its ecological implications.

Enabling the proliferation of AI by extension means increasing the strain on a climate emergency that’s already the realization of worst-case scenarios. The secrecy that surrounds data centers is one indicator – though, of course, security is also a factor in this – that best practice doesn’t always include the environmental angle.

Proving a cut above the rest must now mean proving a commitment to green initiatives. Hammerspace works to connect users with their data on any vendor’s data center storage or public cloud services so that their customers don’t have to compromise on the solution that they choose – a solution that should center environmental best practice.

Hammerspace has also expanded its third-party storage support to include tape storage, a method that supports ESG initiatives by reducing energy consumption and carbon footprint – and one that no longer means lengthier processing.

When it comes to hammer-based solutions…you probably have to be worthy of them.

Vietnamese government starts collecting biometrics
https://techhq.com/2024/02/biometric-data-id-cards-vietnam-government-dna-too/
Wed, 21 Feb 2024 12:30:30 +0000


• Vietnam is set to collect an enormous amount of biometric data from its citizens.
• Security in the system will obviously be paramount.
• The development seems likely to generate whole new waves of crime by bad actors – biocrime.

Biometric data is increasingly used in technological security systems, yet retina scans and voice recognition still call to mind the hi-tech lairs of fictional villains. Face ID seems a lot less glam when you’re trying to pay for a bus ticket with your phone.

Biometric data is the key to many a sci-fi smash.

Minority Report speculates on surveillance systems in 2054. Tom Cruise is there, too.

In Vietnam, citizens can now expect to give the government a slew of their biometric data, per the request of Prime Minister Pham Minh Chinh. Collection of biometric data will begin in July this year following an amendment to the Law of Citizen Identification passed in November 2023.

The amendment allows the collection of biometric data and the recording of blood type and other related information.

The Ministry of Public Security will collect the data, working with other areas of government to merge the new identification system into the national database. The new identification system will use iris scans, voice recordings and even DNA samples.

Vietnamese citizens’ sensitive data will be stored in a national database and shared across agencies to allow them to “perform their functions and tasks.” We’re sure the sharing of highly personal data won’t encounter any issues – accidental or otherwise.

Regarding the method of collection, the amended law says:

“Biometric information on DNA and voice is collected when voluntarily provided by the people or the agency conducting criminal proceedings, or the agency managing the person to whom administrative measures are applied in the process of settling the case according to their functions and duties whether to solicit assessment or collect biometric information on DNA, people’s voices are shared with identity management agencies for updating and adjusting to the identity database.”

Well, obviously.

Chairman of the National Defense and Security Committee, Le Tan Toi, has expressed the belief that a person’s iris is suitable for identification as it does not change over time and would serve as a basis for authenticating an identity.

As things currently stand, ID cards are issued to citizens older than 14, and aren’t mandatory for the six to 14 age range – though they can be issued if necessary. The new ID cards will look much the same but undergo several changes, not least the addition of holders’ biometric data.

They’ll incorporate the functions of some other ID documents too, including driver’s licenses, birth and marriage certificates, and health and social insurance documents. All of your personal information stored in the same place… What could go wrong?

Biometric data must be secured

Fingerprints on the ID card will be replaced by a QR code linked to the holder’s biometric and identifying data.

There are roughly 70 million adults in Vietnam, so collecting that huge amount of data from them all will be no mean feat. In case you hadn’t got there yet: security will be paramount. The data on citizens is prime material for identity theft; we might expect to see an increase in bad actor activity, including skimming to collect fingerprints from ATMs.

Technology is always evolving, but it’s not necessarily guaranteed to evolve for the better. A group of researchers from China and America recently outlined a new attack surface, proposing a side-channel attack on the Automatic Fingerprint Identification system: “finger-swiping friction sounds can be captured by attackers online with a high possibility.”

Ensuring that the personal information of Vietnamese citizens is secure at every level is a responsibility the government must be prepared to take on.

There’s also the sticky issue of government surveillance that almost doesn’t bear thinking about. We’ll leave the tinfoil hat within reach.

From airport services to citizen ID…

Stefan Greifeneder tells us more about Dynatrace’s plans for the future
https://techhq.com/2024/02/interview-stefan-greifeneder-carbon-neutrality-and-more-dynatrace-perform/
Fri, 09 Feb 2024 12:20:00 +0000


• The drive towards carbon neutrality is a key business goal in 2024.
• Many carbon calculators, though, are vague and of limited value.
• Dynatrace is working to deliver actionable clarity on carbon neutrality.

On achieving carbon neutrality and giving GenAI the right information: we spoke with Stefan Greifeneder, VP of product management at Dynatrace, about the waves the company is making at Perform 2024.

Stefan Greifeneder of Dynatrace explained the role of Davis in driving businesses towards carbon neutrality in their processes.

Stefan Greifeneder, Dynatrace.

A major announcement was the addition of AI observability to the Dynatrace stack, while Davis, the existing AI solution at Dynatrace, got a GenAI upgrade.

Can you summarize what differentiates Davis AI from the static of the last year that was all about AI?

The core differentiation is that in our space of stability, one of the core challenges is the case of errors, pinpointing root causes, things that require precision. Before GenAI, we were using our solution Davis AI to do exactly that.

The differentiation here is that we’re combining parts of AI that we already used, which we call predictive and causal AI. The core of Davis AI is this causality thing, because when talking about customers facing problems in big cloud environments in IT and software, it’s massively complex, there are lots of data points, lots of different components.

If something goes wrong, if something breaks, then it’s not just one thing broken but a chain of events.

What we have, due on the one hand to our causal AI and how we collect the data, is an understanding of the dependencies between components. So, we understand how things relate to each other. And we can follow that path; this is something that’s only possible if you understand those dependencies, if you have this information available.

It can’t be done, and not at that precision, with pure GenAI approaches. That’s why we use what we call hypermodal AI, meaning we mix three types [of AI]. That’s predictive AI for precise predictions based on available data, causal AI that’s following pathways and then generative AI, which we use going forward for, for example, remediation recommendations.

What’s your answer to the lack of trust in AI? Do you think observability can solve that?

Good question. What we’re doing right now in our solution to help our customers resolve issues, is not yet using generative AI.

What we’re doing with the predictive and causal parts of AI is following a white paper approach. We have precisely documented what we’re doing. We even show visually in the product how we came to each conclusion – you could think of it as explainable AI.

What we’re doing right now can be easily reconstructed or reproduced; it’s a kind of deterministic result. It’s the precise result, not just the correlation. This is why for those use cases in our customer base, we feel high trust. All of our customers have been using Davis AI since we’ve had it – the trust there is very high.

Now, if we talk about that, specifically, I think this is about building trust. But it’s also about having the right information available for the generative AI to come to good conclusions. Imagine asking generative AI “show me my revenue of application XYZ yesterday.” It would be hard to get a precise answer. But if you weave in the data we have combined with our causal AI then we can engineer the prompt and ask much more precise questions.

In the end, if you use it successfully a couple of times then the trust will be there. Generative AI is more blackbox than predictive and causal AI are, so by mixing it I think the trust is higher.

Dynatrace is a very technical company. Do you find you get better traction with systems engineers than the non-technical folk?

We see the go-to market motion, the sales conversations, at different flight altitudes. It really depends who you’re talking to. For practitioners, for the engineers, for the tech guys, we of course have more technical stories about the immediate value of Dynatrace when they start using it for their specific use case.

When we go up the food chain in the company, the message changes. Our real strength is helping people who are responsible for many areas. It benefits the engineer, but the real strength is for the CIO who can resolve a lot of problems across the organization.

I think for value messages there’s no technical understanding necessary because ROI, TCO discussions or customer satisfaction all come without technical knowledge. Where technology comes into it is in understanding the differentiation we have.

Some of the messages we and some competitors have are pretty similar, to be honest: everybody’s talking about AI and so on. To really understand what the Dynatrace difference is requires some technical understanding.

The partnership with Lloyds Banking Group to help cut carbon emissions – where do you see that going?

We all know ESG is a globally critical topic. Everyone’s looking at new regulations and it’s a very, very important issue. In our customer base, I think there are two angles. One is the ESG angle, carbon neutrality for example. The other one is cloud cost — those are tied tightly together, right?

What we’re doing with Lloyds is one example of what customers can choose to do; if they’re already using Dynatrace broadly, they get [the carbon impact data] on top – it’s not an additional priced item.

In comparison to other solutions, using Dynatrace means not only understanding carbon impact but optimizing it. You can see where you stand but also drill down and understand which applications, which parts of the infrastructure of your cloud are contributing in which way to your carbon footprint.

We see a great appetite in our customer base to apply it to their own companies.

If you head to the Dynatrace website, you can see the Carbon Impact demo for yourself.

A conversation with Dynatrace’s CTO
https://techhq.com/2024/02/dynatrace-cto-bernd-greifeneder-causal-ai-and-other-stuff/
Fri, 09 Feb 2024 09:30:32 +0000


• Dynatrace can now deploy causal AI to deliver certainty of results.
• This fits a particular niche of need for enterprises that GenAI can’t deliver.
• It’s also delivering a carbon calculator that goes beyond standard, vague models.

From causal AI to harsh deletion: after a run of exciting announcements at Perform 2024, we spoke to Dynatrace’s CTO and co-founder, Bernd Greifeneder, to get some insight on the technology behind the observability platform.

As the “tech guy,” how do you approach the marketing side of things? How do you get across the importance of Dynatrace to those who don’t “get” the tech?

Right now we are on that journey – actually, this Perform is the first one explicitly messaging to executives. It’s worked out great, I’m getting fantastic feedback. We also ran breakout sessions with Q&As on this three by three matrix to drive innovation by topics like business analytics, cloud modernization and user experience.

Then, we have the cost optimization because every executive needs to fund something. I can explain ten ways to reduce tool sprawl alone with Dynatrace. Cloud cost coupled with carbon is obviously a big topic, and the third layer is risk mitigation.

No one can afford an outage, no one can afford a security breach – we help with both.

How do you sell causal AI?

Bernd Greifeneder presented Dynatrace’s new products on the mainstage at Perform 2024.

Executives have always asked me how to get to the next level of use cases. I think that’s another opportunity; in the past we were mostly focused on middle management. If we first give executives the value proposition, they can go down to the next level of scale, implementing the use cases they wanted.

The other aspect is extending to the left. It’s more than bridging development with middle management, because you can’t leave it just to developers. You still need DevOps and platform engineering to maintain consistency and think about the bigger picture. Otherwise it’s a disaster!

How has changing governance around data sovereignty affected Dynatrace clients – if at all?

[At Perform 2024, Bernd announced Dynatrace OpenPipeline, a single pipeline for petabyte-scale ingestion of data into the Dynatrace platform, fuelling secure and cost-effective analytics, AI, and automation – THQ.]

Well, we have lots of engagements on the data side – governance and privacy. For instance, with OpenPipeline it’s all about privacy because when customers just collect data it’s hard to avoid it being transported.

It’s best not to capture or store it, but in a production environment you have to. We can qualify out the data at our agent level and maintain interest in it throughout the pipeline. We have detection of what is sensitive data to ensure it isn’t stored – when it is, say if analytics require it to be, you have custom account names on the platform.

That means you can inform specific customers when an issue was found and fixed, but still have proper access control.

We also allow harsh deletion; the competition offers soft deletion only. The difference is that although soft deletion marks something as deleted, it’s still actually there.

Dynatrace’s hard deletion enables the highest standard of compliance in data privacy. Obviously, in the bigger scheme of Grail in the platform, we have lots of certifications from HIPAA and others on data governance and data privacy.

[Dynatrace has used AI on its platform for years; this year it’s adding a genAI assistant to the stack and introducing an AI observability platform for their customers – THQ.]

What makes your use of AI different from what’s already out there? How are you working to dispel mistrust?

Would you want to get into an autonomous car run by ChatGPT? Of course not, we don’t trust it. You never know what’s coming – and that’s exactly the issue. That’s why Dynatrace’s Davis hypermodal AI is a combination of predictive, causal and generative AI.

Generative AI is the latest addition to Davis, intended as an assistant for humans, not to drive automation. The issue is the indeterminism of GenAI: you never know what you’ll get, and you can’t repeat the same thing with it over and over. That’s why you can’t automate with it, or at least automate in the sense of driving a car.

What does it mean then for running operations? For a company, isn’t this like driving a car? It can’t go down, it can’t be insecure, it can’t be too risky. This is where causal AI is the exact opposite of nondeterministic, meaning Dynatrace’s Davis causal AI produces the same results over and over, if given the same prompts.

It’s based on actual facts. It’s about causation not correlation, really inferring. In realtime, a graph is created so you can clearly see dependencies.

For example, you can identify the database that had a leak and caused a password to be compromised and know for certain that a problem arose from this – that’s the precision only causal AI provides.

Generative AI might be able to identify a high probability that the database leak caused the issue, but it would also think maybe it came from that other source.

This is also why all the automation that Dynatrace does is based on such high-quality data. The key differentiator is the contextual analytics. We feed this high-quality, contextual data into Davis and causal AI helps drive standard automation so customers can run their production environments in a way that lets them sleep well.

Observability is another way of building that trust – your AI observability platform lets customers see where it’s implemented and where it isn’t working.

Yeah, customers are starting to implement in the hope that generative AI will solve problems for them. With a lot of it, no one really knows how helpful it is. We know from ChatGPT that there is some value there, but you need to observe it because you never know what it’s doing.

Because of its nondeterministic nature, you never know what it’s doing performance wise and cost wise, output wise.

What about the partnership with Lloyds? Where do you see that going?

Especially for Dynatrace, the topic of sustainability and FinOps go hand in hand and continue to rapidly grow. We’ve also implemented sophisticated algorithms to precisely calculate carbon, which is really challenging.

Here’s a story that demonstrates how challenging it is: enterprise companies need to fulfil stewardship requirements. To do so, they might hire another company that’s known in the market to help with carbon calculation.

But the way they do it is to apply a factor to the amount the enterprise spends with AWS or Google Cloud, say, and provide a lump sum of carbon emissions – how can you optimize that?

The result is totally inaccurate, too, because some companies negotiate better deals with hyperscalers; the money spent doesn’t exactly correlate to usage. You need deep observability to know where the key carbon consumption is, whether those areas truly need to be run the way they are.

We apply that to this detailed, very granular information of millions of monitored entities. With Lloyds, for example, optimization allowed a cut of 75 grams of carbon per user transaction, which ultimately adds up to more and more.

Our full coverage of Dynatrace Perform is here, and in the next part of this article, you can read a conversation with Dynatrace VP of product management Stefan Greifeneder.

Dynatrace Perform: here’s what you missed
https://techhq.com/2024/02/dynatrace-observability-solutions-perform-2024-new-announcements/
Thu, 08 Feb 2024 12:00:23 +0000


• Observability solutions are key to making sound tech choices in 2024.
• In particular, without observability solutions, it can be hard to gauge the success of your GenAI investments.
• In Vegas, Dynatrace announced its latest solutions to the problems enterprises are facing.

Last week the world went to Las Vegas – where Dynatrace held its annual conference. If you missed it, don’t worry too much: we can’t quite recreate the atmosphere of the main stage, but there were a ton of exciting announcements to share.

Dynatrace started out as just a product but over the years has grown into the giant it is today, making waves as an all-purpose business analysis and business intelligence platform. After a year of huge digital transformation and industry-wide tectonic shifts, it was only logical to make this year’s conference theme “Make waves.”

Making waves on observability solutions - Dynatrace, center stage.

It’s easy to broadly gesture at the changes the past year saw, but Rick McConnell, Dynatrace CEO, summarized some megatrends that the whole industry bought into.

The Dynatrace difference?

Rick McConnell explains why Dynatrace is different.

Cloud modernization. We’ve all established cloud services – kinda. The tech industry is, as of right now, 20% cloud-based. Outside of that, the world is only 10% there; the scale of cloud services at this stage has seen a reduction in cost and improved customer satisfaction and user experience.

Hyperscaler growth. Having grown by 50%, it’s no wonder that this area has hit $200bn in annual revenue.

Artificial Intelligence. Yup, unsurprisingly. Dynatrace’s insights aren’t unsurprising, though: AI became as ubiquitous as digital transformation in 2023, growing to be an essential pillar in business.

The new essential? Generative AI as a pillar of business.

Threat protection. AI is crucial to driving this. Threat protection is paramount in the current climate, born of exploding workloads and ways of accessing them: many professionals don’t realize that their business is operating across a huge threat landscape.

It shouldn’t need explaining why cloud management is so important, then. Not only should it be closely monitored but optimized, too. Optimizing cloud cost isn’t only an economics game as the climate emergency makes environmental cost key to the bigger picture.

Bernd Greifeneder, Dynatrace CTO, admitted that often, environmental impact is the concern of younger engineers. That doesn’t mean they should be left alone to their green coding!

Observability solutions reveal sobering data.

With stats like this, shouldn’t older engineers be worried too?

Optimized sustainability is enabled by visibility; you can’t change what you can’t measure. Observability solutions solve the visibility issue, but what else can be done?

That’s where the first big announcement comes in: Dynatrace is teaming up with Lloyds Banking Group to reduce IT carbon emissions. Using insights and feedback from Lloyds Banking Group, Dynatrace will further develop Dynatrace® Carbon Impact.

Joining with a bank to save the environment. No, really...

The app translates usage metrics – the likes of CPU, memory, disk, and network I/O – into CO2 equivalents (CO2e). That measurement is unique to Dynatrace, which stands out above other offerings that use only vague calculations to deliver a number representing CO2 usage.

With Dynatrace, energy and CO2 consumption is detailed per source, with in-app filters allowing users to narrow focus to high-impact areas, and actionable guidance provided to help reduce overall IT carbon footprint.
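As a rough illustration of the kind of translation involved – not Dynatrace’s actual methodology, and with every coefficient below invented for the example – resource-usage metrics can be mapped first to energy and then to CO2e:

```python
# Illustrative only: converts resource-utilization metrics into a CO2e
# estimate using made-up coefficients. A real model such as Carbon Impact
# is far more granular; this just shows the shape of the calculation.

# Assumed energy cost per unit of utilization, in kWh (hypothetical values).
ENERGY_PER_UNIT_KWH = {
    "cpu_core_hours": 0.006,
    "gb_ram_hours": 0.0003,
    "gb_disk_hours": 0.00005,
    "gb_network_io": 0.0002,
}

GRID_INTENSITY_KG_CO2E_PER_KWH = 0.23  # hypothetical regional grid factor

def estimate_co2e_kg(usage: dict[str, float]) -> float:
    """Return an estimated kg CO2e figure for a dict of usage metrics."""
    energy_kwh = sum(
        ENERGY_PER_UNIT_KWH[metric] * amount for metric, amount in usage.items()
    )
    return energy_kwh * GRID_INTENSITY_KG_CO2E_PER_KWH

# Example: one host's daily usage.
host_usage = {
    "cpu_core_hours": 96.0,    # 4 cores busy for 24 hours
    "gb_ram_hours": 768.0,     # 32 GB held for 24 hours
    "gb_disk_hours": 12000.0,  # 500 GB stored for 24 hours
    "gb_network_io": 150.0,
}
print(f"Estimated footprint: {estimate_co2e_kg(host_usage):.3f} kg CO2e")
```

Keeping the per-source breakdown, rather than just the total, is what makes the in-app filtering and drill-down described above possible.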

Our interview with Bernd details this more closely.

Observability solutions for artificial intelligence

We all expected AI announcements to some extent. What Dynatrace revealed, though, is a market-first: observability for AI.

So far, Dynatrace has stood out from the pack with its own AI usage: the Davis Hypermodal AI model with a core of predictive and causal AI has established user trust, something that the industry in general has seen lost to the hallucinations of generative AI.

Observability solutions for AI.

The new Davis copilot, which utilizes GenAI, is offered alongside the existing core to let customers ask questions in natural language and get deep custom analysis in under 60 seconds.

Not all companies have applied AI capabilities in such a thoughtful way. In fact, in the rush not to be left behind, businesses have utilized AI – for lack of better phrasing – willy-nilly.

So, the leader in unified observability and security has stepped in to offer an extension of its analytics and automation platform to provide observability and security for LLMs and genAI-powered applications.

There’s money in AI, but businesses have no way of measuring return on their AI application – how can you tell whether GenAI is providing what your company needs?

Dynatrace AI Observability uses Davis AI to deliver exactly the overview necessary to ensure organizations can identify performance bottlenecks and root causes automatically, while providing their own customers with the improved user experience that AI offers.

With the not-so-small issue of privacy and security in the realm of AI, Dynatrace also ensures users’ compliance with regulations and governance by letting them trace the origins of their apps’ output. Finally, costs can be forecast and controlled using Dynatrace’s AI Observability by monitoring token consumption.

Organizations can’t afford to ignore the potential of generative AI. Without comprehensive AI observability solutions, though, how can their generative AI investments succeed? How else would they avoid the risk of unpredictable behaviors, AI “hallucinations,” and bad user experiences?

Just how high is AI at any given moment? Observability solutions are important if you're building your business success on the technology.

If you’re skeptical about accepting this truth from the company selling the product, Eve Psalti from Microsoft AI gave her own advice on adopting the new technology: start small! Applications need to be iterated on and optimized – by definition, LLMs are large! It’s literally the first L of the acronym. Observability, then, is the answer.

Achieving observability with Dynatrace

Ensuring that observability is complete means that any question should be answerable, near instantly. Scale should be adoptable in an easy, frictionless way. In the business of trust, Dynatrace provides low-touch automated response.

The complexity of the current environment and the struggles of data ingest – maintaining consistency and security while keeping costs down – have a solution: Dynatrace OpenPipeline.

The third announcement, saved for day two of proceedings, aims to be a data pump for up to 1000 terabytes of data a day. The new core technology provides customers with a single pipeline (hence the name, funnily enough) to manage petabyte-scale data ingestion into the Dynatrace platform.

Hey, you, get onto my cloud… Things the Rolling Stones never knew.

Gartner studies show that modern workloads are generating increasing volumes of telemetry, from a variety of sources. “The cost and complexity associated with managing this data can be more than $10 million per year in large enterprises.”

The list of capabilities is extensive: petabyte scale data analytics; unified data ingest; real-time data analytics on ingest; full data context; controls for privacy and security; cost-effective data management.

One particularly exciting offering, though, is the ease of data deletion. Few appreciate how difficult complete data deletion is, but with Dynatrace they may never have to learn. Data disappears at a click.

All external data that goes onto the Dynatrace platform is also vetted for quality, so observability comes with the assurance that everything is of high enough quality.

Customers can vouch for Dynatrace solutions

Don’t just take it from us, though. Throughout the event – peppered, by the way, with Vegas vibez – we heard from Dynatrace customers to get an idea of how companies can benefit from the various packages on offer.

One such client is Village Roadshow Entertainment – the name might ring a bell for movie lovers. The Australia-based company had been using disparate tools and was “plagued” by background issues; unable to identify what was causing system implosions, a ‘switch it off and on again’ approach meant almost weekly IT catastrophes.

Luckily, by the time the Barbenheimer flashpoint hit cinemas globally, Village Roadshow had Dynatrace’s help. The platform helped truncate unnecessary data, speeding up processes and allowing things to run smoothly – for cinema staff and customers – on the biggest day for the industry in years.

Just as importantly, onboarding with Dynatrace was smooth and didn’t require too much upskilling – a side of IT that is critical to businesses, but not acknowledged by many departments.

The Grail unified storage solution at Dynatrace’s core provides not just a way to visualize data but to make data exploration accessible to everyone, regardless of skill level and work style.

Yes, but can it scale?

With templates enabling users to build an observability dashboard that makes sense to them, Dynatrace’s new interactive user interface coupled with Davis AI means deep analysis is available even to novices.

Segmentation allows data to be split into manageable chunks and organized contextually. Decision-making is thus accelerated, and the enhancements offered by Dynatrace speed up onboarding.

We were also lucky enough to speak to Alex Hibbitt of albelli Photobox, another Dynatrace client – he was awarded Advocate of the Year on the final day of the conference.

Advocate of the Year, Alex Hibbitt.

Five years ago, when he joined Photobox, the company was using a cloud platform that was primarily lift and shift. Now, Alex says the company’s on the road to building a truly cloud-native ecommerce platform to power what Photobox does.

“As an organization who sells people’s memories… the customer journey, the customer experience is really, really important to us, [and] drives fundamentals of how we make money.”

Before Dynatrace, huge amounts of legacy technology, coupled with efforts to go cloud native, had created a behemoth that only a few engineers – Alex being one of them – had the context necessary to understand.

For the sake of his sanity, something had to change: Alex “couldn’t be on call 24/7 all the time – it was just exhausting.”

What Photobox needed was a technology partner that could cover the old and the new but, beyond the traditional monitoring paradigm, provide something truly democratized and take some of the strain off engineers.

Having already covered Dynatrace’s observability solutions – newly announced and not – it should be clear why they were the solution for Photobox’s issues and why Alex advocates for Dynatrace!

There’s more coverage on Dynatrace Perform on its way: come back for interviews with Bernd Greifeneder and Stefan Greifeneder.

 

How does PayPal work? Axing staff, and focusing on users’ data
https://techhq.com/2024/02/how-does-paypal-work-its-stock-price-back-up-to-pre-2022-levels/
Fri, 02 Feb 2024 15:30:24 +0000

  • How does PayPal work its stock price back up to 2022 levels?
  • Company’s new CEO announces raft of new features.
  • Hyper-personalization from its own and 3rd-party sources.

Four months into his tenure as CEO of PayPal, Alex Chriss has announced a continuation of job losses from the company. The latest round of cuts will see the company’s workforce decrease by 9%, which will result in around 2,500 redundancies. These will come from a relative freeze on new recruitment as well as from existing positions.

In an all-staff memo dated January 30 and published on the PayPal website, Chriss stated that “we must execute faster and ensure we are focused on solving our customers’ most critical needs and problems. Specifically, across our organization, we need to drive more focus and efficiency, deploy automation, and consolidate our technology to reduce complexity and duplication.”

Chriss also said that “these decisions were not easy to make,” and that consultations about job losses with staff would take place where local laws make doing so inescapable.

PayPal joins the AI bandwagon

In his first major announcement since taking up the reins at the end of September last year, the CEO showcased several new products, including a “one-click guest checkout experience,” Fastlane, which lets customers pay for goods online without registering with the merchant. Customers with their details stored in the Fastlane service are identified at the point of purchase by email address, confirm their identity (by 2FA), and then tap once to pay. According to PayPal, Fastlane can recognize 70% of guests who are purchasing from sellers’ sites.
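For illustration only, the flow just described – recognize the shopper by email, challenge them with a second factor, then complete payment in one tap – could be modelled as below. The function names, the in-memory profile store and the hard-coded one-time code are all invented for the sketch; none of it reflects PayPal’s actual API.

```python
# Toy, in-memory sketch of a Fastlane-style guest checkout (hypothetical names).

STORED_PROFILES = {
    "shopper@example.com": {"payment_token": "tok_demo_123", "phone": "+1-555-0100"},
}

def send_one_time_code(phone: str) -> str:
    # Stand-in for texting a verification code to the shopper's phone.
    return "246810"

def fastlane_checkout(email: str, cart_total: float, code_entered: str) -> str:
    profile = STORED_PROFILES.get(email)
    if profile is None:
        return "unknown email: fall back to regular guest checkout"

    if code_entered != send_one_time_code(profile["phone"]):
        return "2FA failed: payment not attempted"

    # Identity confirmed: stored details are reused, so one tap completes payment.
    return f"charged {cart_total:.2f} to {profile['payment_token']}"

print(fastlane_checkout("shopper@example.com", 42.50, "246810"))
```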

A further feature slated for the near future is ‘Smart Receipts,’ electronic receipts emailed to shoppers that contain recommendations from the merchant they’ve just used, appropriate to their recent purchases. That means merchants can allow PayPal’s algorithm access to their SKUs’ details, enabling the AI to make ‘smart recommendations.’ Nearly half (45%) of PayPal’s customers open their email receipts after a purchase, making the medium a perfect vehicle for further commercial messages.

The “AI-powered suggestions” will also be based on information on a shopper’s behavior, drawn from third-party data sources, in addition to the data PayPal has access to. The same technology behind ‘Smart Receipts’ will also be used in the ‘Advanced Offers Platform,’ which will grant any merchant “the ability to reach customers based on what they have actually bought […], down to the stock keeping unit (SKU) and the individual product,” to present them with offers specific to each customer. Merchants will pay PayPal for each purchase from a customer who takes up an offer rather than for impressions or abandoned click-throughs. If shoppers don’t want the data PayPal holds on their choices to be shared with merchants, they may opt out.

PayPal’s market position

So how does PayPal work outside its core markets? It currently processes around a quarter of the world’s online transactions and has 35 million active merchant accounts. It’s worth noting that online payments tend to go through other payment processors in the four major markets of China, Japan, South Korea, and India. With a good portion of potentially fertile ground already occupied by local providers (AliPay, Paytm, Mobikwik, Rakuten, Jkopay, et al.) that are able to offer services more attuned to local cultural practices (buy-now-pay-later, cash on delivery, etc.), how does PayPal find the space – and the demand – to work its techno-commercial magic?

Simple. Not original in any sense, but simple. PayPal is looking to the trove of payment data to which it already has access from its existing users to bring value for its stockholders.

In addition to charging merchants and shoppers varying fees for the use of PayPal as a payment platform, it also gains access to an increasing amount of users’ data, including shopping patterns, preferences, and other markers, which it can amalgamate with other information available on the open data markets.

From its unique position of trust created when online payments were in their nascent phase, and its now near-ubiquity, PayPal can use the information to which it is party and monetize it, taking revenue from end-users keen to present or take up, for example, special offers. With access to highly detailed information (“down to the stock keeping unit […] and the individual product”), PayPal can present an amazingly attractive offering to the data markets.

Losing 9% of its workforce will enable the company to streamline its balance sheet in the short- to mid-term as it builds on exploiting its digital reserves. Losing staff could, therefore, be seen as part of a transition to a changing business model. That’s little solace to those clearing their desks, of course, and PayPal has also paid a short-term price: its stock value has fallen by 20% in recent months.

But investors will likely see a bounce-back in the next few years as the company settles into a more prominent role as a data broker that also happens to process online payments.

Even more cyberattacks on hospitals!
https://techhq.com/2024/01/cyberattacks-on-hospitals-have-long-lasting-effects/
Tue, 23 Jan 2024 15:00:44 +0000


• Cyberattacks on hospitals are on the rise.
• A Thanksgiving attack included a cancer center.
• Cyberattacks on hospitals are relatively easy, due to a mixture of legacy tech and staggered digital transformation.

Cyberattacks on hospitals have become an increased threat in recent years. Although the technology used in operating theaters is top of the range and carefully checked, over on the admin side, a combination of rushed digital transformation and legacy software leaves a huge attack surface wide open.

Happy Thanksgiving

On the morning of Thanksgiving 2023, Ardent Health Services took its services offline following a ransomware attack. That wasn’t the only cyberattack on a hospital: the Fred Hutchinson Cancer Center was also targeted by cybercriminals.

Although the attack on Ardent had instantaneous effect, the cyberattack on Fred Hutchinson didn’t immediately have clear implications. Teams noticed some “unauthorized activity” on “limited parts” of the healthcare system’s clinical network, according to Christina VerHeul, the organization’s associate vice president of communications.

In the immediate aftermath, VerHeul said “The reality is, we don’t know to what extent information has been obtained, nor any of the details of what that information is.”

The investigation ran on into this year and now the effects of the cyberattack on the hospital are being felt. The personal information of roughly 1 million patients was leaked, leading to email threats from hackers and escalating menacing messages.

Patients are receiving “swatting” threats and spam emails warning that unless a fee is paid, patients’ names, Social Security and phone numbers, medical history, lab results and insurance history will be sold to data brokers and on black markets.

Steve Bernd, a spokesperson for FBI Seattle, said last week there’s been no indication of any criminal swatting events, which occur when a bogus claim is made to law enforcement so that emergency response officers, like SWAT teams, show up at a person’s home.

Fred Hutchinson patient JM has been inundated with spam emails since the breach. In an email to the Seattle Times, he credits Fred Hutchinson with saving his life after his diagnosis of follicular lymphoma over 10 years ago.

How low can you go? Stealing data from cancer patients low?

“I have absolutely nothing bad to say about the facility and the providers in it,” JM wrote. “But this cyberhack has got me way spooked.”

That being said, the center’s communication efforts haven’t been up to scratch. JM hasn’t received direct responses to his requests for information about the data leak.

Since the hack, Fred Hutchinson has sent notifications through MyChart to patients, posted updates on its online FAQ page, and mailed letters out to patients, said VerHeul. Apparently, investigations have revealed the breach accessed patient information between November 19th and 25th.

Cyberattacks on hospitals add stress to recovering patients

Cyberattacks on hospitals take different forms: when Ardent was hit, hospitals had to close to emergency patients, putting lives at risk. In the case of the Fred Hutchinson Center, all clinics remained open following the attack but patients have been the direct targets of bad actors.

The cyberattack primarily impacted clinical data of former and current Fred Hutchinson patients, although the information of some UW Medicine patients was also leaked, according to hospital leaders.

While many details about the breach are still under investigation, Fred Hutchinson has said it believes hackers “exploited a vulnerability” in a workspace software called Citrix that allowed them to gain access to its network.

The weakness is known as the “Citrix Bleed” and federal security teams say it allows threat actors to bypass password requirements and multifactor authentication measures.

Cybersecurity is rarely taken seriously in sectors that don’t consider themselves to be at risk; sensitive personal data managed by hospital systems should be treated more carefully, and that means investing.

Google’s first data center in the UK: a billion-dollar tech investment
https://techhq.com/2024/01/google-billion-dollar-uk-data-center-unveiled/
Mon, 22 Jan 2024 15:00:00 +0000

  • The data center will be the first to be operated by Google in the UK.
  • Google’s 2022 deal with ENGIE adds 100MW wind energy.
  • The aim is for 90% carbon-free UK operations by 2025.

In the ever-evolving landscape of cloud computing, Google Cloud is a formidable player, shaping the global data center market with its leading solutions and heavyweight presence. Google Cloud’s commitment to expanding its global footprint is exemplified by its recent announcement of a US$1 billion investment in a new data center in Waltham Cross, Hertfordshire, UK. 

The move not only underscores the company’s dedication to meeting the needs of its European customer base, but also aligns with the UK government’s vision of fostering technological leadership on the global stage. One of the critical pillars of Google Cloud’s presence in the UK is its substantial investment in cutting-edge data infrastructure. Notably, the upcoming data center will be Google’s first in the country.

Illustration of Google’s new UK data centre in Waltham Cross, Hertfordshire. The 33-acre site will create construction and technical jobs for the local community. Source: Google.

“As more individuals embrace the opportunities of the digital economy and AI-driven technologies enhance productivity, creativity, health, and scientific advancements, investing in the necessary technical infrastructure becomes crucial,” Debbie Weinstein, VP of Google and managing director of Google UK & Ireland, said in a statement last week.

In short, this investment will provide vital computing capacity, supporting AI innovation and ensuring dependable digital services for Google Cloud customers and users in the UK and beyond.

Google already operates data centers in various European locations, including the Netherlands, Denmark, Finland, Belgium, and Ireland, where its European headquarters are situated. The company already has a workforce of over 7,000 people in Britain.

Google Cloud’s impact extends far beyond physical infrastructure, though. The company’s cloud services have become integral to businesses across various sectors in the UK. From startups to enterprises, organizations are using Google Cloud’s scalable and flexible solutions to drive efficiency, enhance collaboration, and accelerate innovation.

The comprehensive nature of Google Cloud’s offerings, including infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS), ensures that it caters to the diverse needs of the UK’s business landscape.
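
By way of a concrete, hedged example of what UK-hosted capacity can mean for those customers, the snippet below uses the google-cloud-storage Python client to create a storage bucket pinned to Google’s existing London region (europe-west2). The project and bucket names are placeholders, and this is a minimal sketch rather than an official Google example.

# Minimal sketch: creating a Cloud Storage bucket pinned to Google's
# London region (europe-west2), keeping the data on UK-hosted infrastructure.
# Requires: pip install google-cloud-storage, plus application default credentials.

from google.cloud import storage

def create_uk_bucket(bucket_name: str, project_id: str) -> storage.Bucket:
    client = storage.Client(project=project_id)
    # The location parameter pins the bucket to a specific region.
    bucket = client.create_bucket(bucket_name, location="europe-west2")
    print(f"Created {bucket.name} in {bucket.location}")
    return bucket

if __name__ == "__main__":
    # Placeholder names - substitute your own project and bucket.
    create_uk_bucket("example-uk-data-bucket", "my-gcp-project")

The same region-selection idea applies across Google Cloud’s compute and data products, which is why the physical location of new data centers matters to customers with residency or latency requirements.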

The investment in the Waltham Cross data center is part of Google’s ongoing commitment to the UK. It follows other significant investments, such as the US$1 billion acquisition of its Central Saint Giles office in 2022, a development in King’s Cross, and the launch of the Accessibility Discovery Centre, fostering accessible tech across the UK.

“Looking beyond our office spaces, we’re connecting nations through projects like the Grace Hopper subsea cable, linking the UK with the United States and Spain,” Weinstein noted.

“In 2021, we expanded the Google Digital Garage training program with a new AI-focused curriculum, ensuring more Brits can harness the opportunities presented by this transformative technology,” Weinstein concluded. 

Google is investing US$1 billion in a new UK data center to meet rising service demand, supporting Prime Minister Rishi Sunak’s tech leadership ambitions. Source: Google.

24/7 carbon-free energy by 2030

Google Cloud’s commitment to sustainability also aligns seamlessly with the UK’s environmental goals. The company has been at the forefront of implementing green practices in its data centers, emphasizing energy efficiency and carbon neutrality. “As a pioneer in computing infrastructure, Google’s data centers are some of the most efficient in the world. We’ve set out our ambitious goal to run all of our data centers and campuses on carbon-free energy (CFE), every hour of every day by 2030,” it said.

This aligns with the UK’s ambitious targets to reduce carbon emissions, creating a synergy beyond technological innovation. Google forged a partnership with ENGIE for offshore wind energy from the Moray West wind farm in Scotland, adding 100 MW to the grid and propelling its UK operations towards 90% carbon-free energy by 2025. 
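
As a simplified illustration of how an hourly carbon-free energy (CFE) score differs from an annual average, the sketch below counts, for each hour, only the demand actually covered by carbon-free supply. The figures are invented and the formula is a plain reading of the hourly-matching idea, not Google’s exact methodology.

# Simplified sketch of an hourly carbon-free energy (CFE) score:
# for each hour, only the demand actually covered by carbon-free supply counts.
# Sample figures are invented for illustration.

def hourly_cfe_percent(demand_mwh: list[float], carbon_free_mwh: list[float]) -> float:
    """Share of total consumption matched by carbon-free energy, hour by hour."""
    matched = sum(min(d, c) for d, c in zip(demand_mwh, carbon_free_mwh))
    total = sum(demand_mwh)
    return 100.0 * matched / total

# Four illustrative hours: windy night, calm morning, sunny noon, evening peak.
demand      = [40.0, 55.0, 60.0, 70.0]   # MWh consumed by the data center
carbon_free = [50.0, 20.0, 65.0, 30.0]   # MWh of matched wind/solar supply

print(f"CFE score: {hourly_cfe_percent(demand, carbon_free):.1f}%")  # 66.7%
# Surplus carbon-free energy in one hour cannot offset a shortfall in another,
# which is what makes 24/7 matching harder than an annual "100% renewable" claim.

That hour-by-hour accounting is why additional firm supply, such as the Moray West offshore wind deal, matters more than headline renewable totals.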

Beyond that, the tech giant said it is exploring ways to recover data center heat off-site, sharing warmth with nearby homes and businesses for the benefit of local communities.

The post Google’s first data center in the UK: a billion-dollar tech investment appeared first on TechHQ.

]]>
Learning to code wasn’t enough as the job market crashes https://techhq.com/2024/01/software-engineer-job-market-not-good-rise-of-ai/ Thu, 11 Jan 2024 12:00:39 +0000 https://techhq.com/?p=231034

• The software engineer job market is slumping badly. • “Learn to code” advice overtaken by generative AI. • Which safe sector will feel the floor ripped out from under it next? The software engineer job market has hit a wall. Only six percent of people in the market are confident they’d be able to... Read more »

The post Learning to code wasn’t enough as the job market crashes appeared first on TechHQ.

]]>

• The software engineer job market is slumping badly.
• “Learn to code” advice overtaken by generative AI.
• Which safe sector will feel the floor ripped out from under it next?

The software engineer job market has hit a wall.

Only six percent of people in the market are confident they’d be able to get another job with the same pay. Historically, studying to become a software engineer was practically a passport to job security, a sure bet in an ever-changing job market – so why is that no longer the case?

At one time, entry-level Google software engineers reportedly earned almost $200,000 a year plus benefits. High demand for engineers meant jobs were never in short supply.

That’s all beginning to change amid growing competition for software jobs. Thanks partly to an industry-wide downturn and partly to the growing threat of AI, a job market once overflowing with opportunities is now anything but.

The software engineer job market is much less secure in the age of generative AI.

The state of the software engineer job market

Speaking to Motherboard, unemployed software engineer Joe Forzano said “the amount of competition is insane.” Since losing his job in March, he’s applied to over 250 jobs; “it has been very, very rough.”

According to a December survey of 9,338 software engineers carried out by Blind, an online anonymous platform for verified employees, almost nine in ten software engineers said it’s harder to get a job now than pre-pandemic. 66% say it’s “much harder.”

Even over the last year, just under 80% of respondents said the job market has become still more competitive. Only 6% felt that, were they to lose their current jobs, they’d be rehired at the same rate of pay.

It may come as a surprise to some that the widespread drop in hiring across the tech industry hadn’t already hit the software engineer job market. Over 2022 and 2023, the sector saw more than 400,000 layoffs as companies feeling the end of the pandemic boom battened down the hatches and operated on the barest possible bones.

Until now, staff in non-technical fields have felt the brunt of the job cuts. The Wall Street Journal reported that while tech companies cut their recruiting teams by 50%, engineering departments lost only 10% of their staff.

Job prospects were so much better for engineers in the tech sector that, as others began to worry about the security of their roles, “learn to code” became a mocking rejoinder: if you can do something useful, you’ll always be needed.

Bloomberg writes that at Salesforce, engineers were four times less likely to lose their jobs than those in marketing and sales – a trend evident at Dell and Zoom, too.

A computer science degree will set you back quite significantly – for Forzano, it meant $180,000 of debt. Like countless others, he considered this small fry: “the whole concept was [that] it was a good investment to have that ‘Ivy League degree’ in an engineering field.”

Making back the money would be no problem.

Except that post-pandemic “it’s a completely different landscape.” Forzano tells Motherboard that looking back, his decision to major in computer science was “very naïve.”

The unstoppable specter of AI

Alongside sector-wide cutbacks, the rise of AI has had repercussions for the software engineer job market. In fact, among the first waves of generative AI programs to take off were tools that let users generate code from natural language prompts or have their code auto-completed as they type.
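
For readers who haven’t used them, here is a rough sketch of how such a natural-language-to-code request can look in practice, assuming an OpenAI-style chat completions API. The model name, prompts, and helper function are placeholders for illustration, not the internals of any particular product; commercial tools such as GitHub Copilot wrap far more context and tooling around the same basic call.

# Rough sketch of a natural-language-to-code request, assuming an
# OpenAI-style chat completions API (pip install openai, API key in the environment).
# Model name and prompts are placeholders, not a specific product's internals.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate_code(task_description: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "You write short, idiomatic Python functions."},
            {"role": "user", "content": task_description},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(generate_code("Write a function that deduplicates a list while keeping order."))

A single prompt like this can replace a chunk of routine boilerplate work, which is exactly the kind of task that used to keep junior engineers busy.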

Last year, Google CEO Sundar Pichai said AI-powered coding tools had sped up code completion for its workers by six percent. An article in the Atlantic, “So Much for ‘Learn to Code’,” made the claim in September that computer science is no longer a safe major.

Perhaps with a touch of ego at work, software engineers surveyed in December didn’t express much concern about their own jobs being made redundant by AI: only 28% said they were concerned. Beyond themselves, though, more than 60% of those surveyed said they thought their company would hire fewer people because of AI.

It’s hard not to feel a touch of schadenfreude that the self-congratulatory ‘you should’ve been clever, like me, and learned to code’ crowd is now facing the same uncertainty that humanities graduates and their non-technical colleagues in the tech industry have long felt, but obviously any human losing work to AI is nothing to celebrate.

The irony isn’t lost on us journalists, though. After all, how long will it be until roles like “writer” are taken over by robots and AI?

The post Learning to code wasn’t enough as the job market crashes appeared first on TechHQ.

]]>