Hardware - TechHQ | Technology and business | Mon, 22 Apr 2024

Electronics recycling – cheese waste has a taste for gold
https://techhq.com/2024/03/electronics-recycling-cheese-waste-has-a-taste-for-gold/ | Mon, 04 Mar 2024

“E-waste is going to be the richest ore of the future,” proclaims Jason Gaber, owner of Mount Baker Mining and Metals. Gaber has a YouTube channel where he shows viewers how hammer mills and shaker tables can be used to process component-laden circuit boards and separate plastics from a mix of metals, including gold.

The business of extracting gold and other valuable materials from electronic junk is growing and is even becoming a popular side hustle. One tonne of electronic circuit boards can yield in the region of 0.150 kg of gold and over double that in silver. Plus, there’s likely to be anywhere from 250 to 300 kg of copper up for grabs per tonne of e-waste.
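To put those yields into money terms, here’s a rough back-of-the-envelope calculation in Python. The metal prices are our own assumptions (approximate 2024 spot prices), not figures from any recycler:

# Rough per-tonne value of the metals quoted above.
# Prices are assumptions (approximate 2024 spot prices), not article figures.
GOLD_USD_PER_KG = 65_000
SILVER_USD_PER_KG = 750
COPPER_USD_PER_KG = 9

gold_kg, silver_kg, copper_kg = 0.150, 0.300, 275  # per tonne of circuit boards

value = (gold_kg * GOLD_USD_PER_KG
         + silver_kg * SILVER_USD_PER_KG
         + copper_kg * COPPER_USD_PER_KG)
print(f"Recoverable metal value: ~USD {value:,.0f} per tonne")  # roughly USD 12,500

On those assumptions, the copper alone is worth a couple of thousand dollars per tonne, and the gold accounts for the lion’s share of the rest.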

For years, device users have been throwing away – collectively – billions of dollars in precious metals as they dispose of unwanted electronics.

In the beginning, e-waste was sent overseas to become somebody else’s problem. But processing e-waste has the potential to be many times more lucrative (and much less polluting) than trying to extract gold and other precious metals from ore mined from the ground.

The ability of environmental clean-up operations to turn a profit is seeing a wave of new e-waste recycling solutions enter the market. And for those who can run their operations at scale, there’s money to be made in turning e-waste into gold.

One of the most ingenious approaches – which is still at an early stage, but generating commercially promising results – uses spongey nanofibrils created from a by-product of cheese-making to soak up gold ions in solution and turn them into flakes.

Demonstrating the potential of their approach, researchers at ETH Zurich in Switzerland used their cheese waste creation to obtain a 450 mg gold nugget from 20 junk motherboards. According to the team, the material was 90.8 percent pure (21-22 carats), which values the reclaimed gold at around USD 28 – based on today’s scrap gold price.

What’s more, the group claims that the combined cost of the source materials and the energy for the process represents just 1/50th of the value of the gold extracted from the e-waste.
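Those claims are easy to sanity-check. Assuming a scrap gold price of roughly USD 65 per gram – our assumption, not a figure from the researchers – the numbers line up:

# Sanity check of the ETH Zurich figures. The scrap gold price is an assumption.
NUGGET_MG = 450
PURITY = 0.908
SCRAP_GOLD_USD_PER_G = 65  # assumed

pure_gold_g = NUGGET_MG / 1000 * PURITY           # ~0.41 g of fine gold
nugget_value = pure_gold_g * SCRAP_GOLD_USD_PER_G
process_cost = nugget_value / 50                  # the claimed 1/50th cost ratio

print(f"Nugget value: ~USD {nugget_value:.0f}")                        # ~USD 27
print(f"Implied cost of processing 20 motherboards: ~USD {process_cost:.2f}")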

Googling ‘how to turn e-waste into gold’ produces plenty of search hits, but many of the recipes feature toxic chemicals. However, by employing a bio-derived ionic sponge, the ETH Zurich researchers believe that they’ve found a gentler path to converting unwanted electronics into valuable materials. And they are not the only ones pursuing more environmentally friendly e-waste processing.

Mint Innovation, whose vision is to become the world’s leading provider of circular green metals, opened a commercial-scale facility in Sydney, Australia, in 2022. According to reports, the operation can salvage USD 85,000 in gold per day from recycled electronics – as well as being able to recover copper and other valuable metals.

Cleverly, Mint’s process – which was developed in New Zealand – makes use of bacteria and fungi that have evolved in regions rich in mine works and abandoned machinery. The organic soup is capable of absorbing metals, and Mint exploits those properties to process e-waste in a more environmentally friendly way than conventional methods.

According to Mint, everything leaving its plant is benign, which means that there are no chemical waste streams to be dealt with. And there’s more good news, as the process is applicable to other waste streams such as used batteries and catalysts.

Are smartphones a social issue?
https://techhq.com/2024/02/should-we-ban-smartphone-use-for-kids-or-adults-too/ | Thu, 29 Feb 2024

• Attitudes to smartphone use are shifting significantly.
• Some parental campaigners want significant bans on screen time for children.
• One French town has banned scrolling in public – for everyone!

Smartphone use may be in its biggest decline since the devices entered common use. Or at least, attitudes towards how the technology impacts day-to-day life are increasingly unfavorable.

Back when owning an iPhone was almost as inconceivable for most people as trying out the Vision Pro is today, the idea that living for 30 days without a smartphone or any other device could sustain a movie in the self-challenge subgenre would have been laughable.

Now, when the idea of going out for a day without a smartphone in your pocket seems almost ludicrous, just such a movie has been released: Disconnect Me, in which Alex Lykos documents his experience of going cold turkey off his devices.

Could you live a month without your smartphone? Would you be sane at the end of it?

Alex Lykos lived for a month without his smart devices. Via IMDB.

As much as it reeks of Western privilege (although really, others in the genre, like Super Size Me, could be accused of the same), Disconnect Me attempts to identify the alienation that disconnecting now represents. Although the Guardian’s review of the movie criticizes some sweeping comments made by Lykos about the impact of technology on children, it concedes that “Lykos gives an affable and personal survey of different issues associated with smartphone use, from self-esteem to attention span.”

And we are far from short of evidence that the technologically driven world we inhabit has repercussions for every generation.

We all know smartphone use is bad for kids

The well-documented ill-effects of using smartphones and other technology from a young age range from physical strain and eye troubles to increased rates of unhappiness in younger and younger children.

Digital screen time is linked to the development of myopia in children and teenagers, and is also linked to dry eye syndrome, digital eyestrain and poor head and neck postures.

Does smartphone use lead to eyestrain? Why yes. Yes, it does.

Via mykidsvision.org.

A slew of lawsuits recently filed against Meta also shine a light on the types of content to which children are exposed on social media, and the impact it has on their wellbeing. One such lawsuit in the US state of New Mexico alleges that Meta “proactively served and directed [children] to egregious, sexually explicit images through recommended users and posts – even where the child has expressed no interest in this content.”

Whistleblower Frances Haugen revealed internal studies showed platforms like Instagram led children to anorexia-related content.

Over in the UK, the government’s Department for Education has confirmed plans to ban use of mobile phones in English schools, issuing statutory guidance on how to do so – guidance that unions have said is already in place in a vast majority of schools: try having a conversation with a child while they scroll TikTok if you can’t imagine why.

Esther Ghey, the mother of Brianna Ghey, a schoolgirl who was murdered on February 11, 2023, believes her daughter was vulnerable after spending so much time online. This month, she’s called for a complete ban on social media access for under-16s.

All that and the alarming links between the time children spend on smartphones and social media and the likelihood they’ll experience bullying, problems of low self-esteem and even self-harm, mean it’s easy to understand why smartphones aren’t conducive to a learning atmosphere.

Research from the London School of Economics found test scores for schoolchildren in Birmingham, London, Leicester and Manchester rose when their schools introduced mobile phone bans.

Making some kind of change to improve all of this isn’t an unpopular idea.

Thousands of UK parents have joined calls for a smartphone-free childhood led by two mothers in response to their fears around the norm of giving children smart devices when they go to secondary school (aged 11 or 12).

After Clare Fernyhough and Daisy Greenwell’s WhatsApp group Smartphone-Free Childhood was promoted on Instagram, over a thousand other parents joined overnight.

Smartphones expose children to a “world that they are not ready for” because they can access pornography and content on self-harm and suicide, which can have a detrimental impact on their mental health, Fernyhough said.

Shocked by the support, Fernyhough said she’d thought hers was an “extreme view,” and had formed the group for solidarity among a minority. The support isn’t exactly surprising: Ofcom research found that 91% of children in the UK own a smartphone by the time they’re 11, and 44% by the time they’re nine.

Changing this is the only way to combat smartphone use in children; being the only one without a smartphone in a class full of other children with one would be alienating and unfair. “That’s a nightmare and no one will do that to their child. But if 20%, 30%, even 50% of kids are turning up with parents making that decision, they are in a much better position.”

Not good for adults, either

We might be keen to overlook the negative effects of smartphone use on young people because we so want to ignore them in ourselves. Sure, children are playing outside less, but have you noticed the quieter streets from around the edges of your own smartphone?

A teen with one earphone constantly in is less conspicuously rude if you’re distracted by your own scrolling.

One French village has decided to take all this more seriously, banning people from scrolling their phones in public. Speaking from a hairdresser’s in the village, Ludivine, a cardiologist, told the Guardian that “everyone is struggling with too much screen time.”

Smartphone use - is it harmful for adults too?

Signs outside a school in Seine-Port. Via the Guardian

Seine-Port has a population below 2,000 and voted yes in a referendum to restrict smartphone use in public. The rules for children are stricter: no screens of any kind in the morning, no screens in bedrooms, before bed or during meals.

If parents of teenagers sign a written agreement not to provide their child with a smartphone before the age of 15, the town hall will provide the child with a handset for calls only – the old-fashioned sort.

A postal worker from the town, Gabriel, said he’s against the move. The 20-year-old said that he spends five hours a day on his phone, “which [he thinks] is reasonable.”

“You can’t ban knowledge at your fingertips.”

All of this does indicate a shift in attitudes towards what was once welcomed as the key to a better future. Post-Covid, there was some pushback against the move online that social distancing had forced, and there’s reluctance from many to buy into the smartphone ordering and payment systems that cropped up as restaurants and shops reopened.

Further, as more and more gets done by smartphones, proof that they’re superior to the ‘real’ things they replace gets harder to show. Sure, the GPS on your phone makes navigating a new city far simpler than using a map did, but the iBeer gimmick got old fast.

People are increasingly dubious about online services being offered in place of “real” ones – and of the companies that own them. The reign of the tech genius is very much over – heralded by Musk and Zuckerberg’s distinct uncoolness – and, tentatively, screens are falling from favor.

That’s not to say anyone’s about to ditch their smartphone, but the concept of an online future is, at the very brink of being realized, less and less appealing.

Connectivity cuts profits for utilities corporations
https://techhq.com/2024/02/transmission-lines-america-utilities-companies-lobbying-against/ | Wed, 28 Feb 2024

• Technically, the US has a shortage of transmission lines.
• Interregional transmission lines would help provision – but potentially hit utility profits.
• Power vs. profit – the ultimate American standoff…

Without the power grid, there would be nothing. Or at least nothing for us to write about, and nothing to write it on; the human condition has become reliant on electricity for more or less everything. So the system that provides it and ensures it gets across the country must be well-planned and beneficial to as many people as possible. Right?

That depends on where you are: there aren’t enough transmission lines in the US to connect regional power networks, driving up the cost of electricity, reducing grid reliability and hampering the deployment of renewable energy.

High voltage transmission lines are what move large amounts of energy across long distances, linking power generation to power consumption. If done right, the transmission network contains a web of connections that create a reliable, redundant power supply system of huge scale.

Electricity makes money for utility companies who, being good capitalists with shareholders to satisfy, want to keep hold of as much of it as possible. That means they refuse to pursue (potentially expensive) interregional transmission projects and go as far as actively impeding them, because new projects threaten their profits and disrupt industry alliances.

Utility companies are lobbying against reforms that would lose them money: addressing transmission shortages has long been on the agenda in Washington, but utility firm lobbying continues to ensure delays.

As things currently stand, around 40 corporations own the vast majority of transmission lines in America. Their hold on the backbone of US grids should be scrutinized.

With more transmission lines come more capacity and connectivity, allowing new power plants to connect and more power to move between transmission networks. Utility companies don’t want that kind of competition, or for their allies to lose regional control – and so transmission expansion is something they oppose.

The existing transmission networks across America were built largely during the last century by for-profit companies. Nonprofit utility providers organized by governments and communities had some part in it too – but by comparison, a very minor one.

The geographical equations of transmission lines

It makes sense that transmission lines tend to be concentrated around fossil fuel reserves and population centers, but there’s another force at play, deciding where the lines are routed: historic utilities alliances.

Where agreements were made between companies to trade energy, sufficient transmission was built that would allow power to move between their local service territories. Over time, alliances have expanded but there are still non-allied utility companies with comparatively very weak connections.

Expansion opens opportunities for new power plant and transmission developers to undercut profits, taking control over the rules shaping the industry. The value of linking networks is widely accepted around the world – but it doesn’t make money for the American companies currently in control of the grid.

Connecting regional networks is critical to the incorporation of renewable energy. For example, four proposed high voltage lines totaling 600km along the seam of regional networks in the upper Midwest would connect at least 28 gigawatts of wind and solar energy. Although the plans have been around for years, utility companies in neighboring regions haven’t moved forward.

Proposed new transmission lines in the upper Midwest. Via Joint Targeted Interconnection Queue Study (JTIQ), MISO, SPP.

We might learn from the European Commission, which in 2018 set a target that each member country should transmit across its borders at least 15% of the electricity produced in its territory. By the end of 2022, 23 gigawatts of cross-border connections in Europe were under construction or in advanced stages of permitting; it’s unlikely those losing profit over the changes were totally on board, but the change has gone ahead all the same.

In the US, building the lines across the Midwest would cost $1.9bn, which is a staggering number – until you compare it with what’s spent every year rebuilding aged transmission infrastructure.

Not only that, but interregional transmission for renewable energies also significantly reduces the cost of use for consumers. Even if renewables aren’t considered, costs would be massively reduced given that better integrated networks reduce the amount of generation capacity needed and decrease energy market cost. Reliability goes up, too.

What limited interregional connection there was proved paramount in preventing total disaster when Storm Elliott disabled power plants and pipelines from the Dakotas to Georgia in 2022. Imagine a reality in which localized disruption didn’t mean blackouts for entire states.

Won’t someone think of the profits?!

That isn’t how utilities companies see it, of course. For them, it means a whole bunch of drawbacks. More connections open the door for competitors who might undercut them on price; with profits in mind, having a monopoly is the more efficient choice, but interregional lines threaten utilities’ dominance over the nation’s power supply.

Also, building a whole new power plant in one area generates more money than just building transmission lines from an existing one. Transmission projects also mean competing against other developers for profit from that construction.

There’s some hope in the BIG WIRES Act, introduced in September by Senator John Hickenlooper and Representative Scott Peters. The acronym, so handily pertinent to the cause, stands for Building Integrated Grids With Inter-Regional Energy Supply. [Do you ever get the feeling politicians sometimes find the acronym first and work backwards? – Ed]

It’s hard not to see a case for nationalizing the power grid, but we won’t spell it out. Climate emergency and all – best to keep an eye on the electricity companies though, eh?

Unless you happen to know a friendly neighborhood god of thunder, you’d probably better look to your transmission lines.

Apple updates iMessage to protect iPhone users from quantum attacks
https://techhq.com/2024/02/fortifying-apple-imessage-defense-against-quantum-threats/ | Mon, 26 Feb 2024

  • Apple labels PQ3 as “Level 3” security, highlighting its robust properties for iMessage.
  • PQ3 adds a post-quantum key to Apple device registration for iMessage.
  • PQ3 adds a rekeying mechanism for iMessage, enhancing security.

The imperative for impregnable security measures has reached a crescendo in the ever-accelerating march toward quantum computing dominance. Today, as the quantum supremacy specter looms, the clamor for steadfast cryptographic shields has amplified. So, in a groundbreaking move, Apple has unveiled PQ3, a cutting-edge post-quantum cryptographic protocol tailored for iMessage. Touted by the tech giant as possessing “unparalleled” security features, PQ3 represents a paradigm shift in communication security.

At the heart of Apple’s embrace of post-quantum cryptography (PQC) lies a deep understanding of the evolving threat landscape. Simply put, as quantum computing advances, traditional cryptographic methods face unprecedented challenges, making the integration of PQC imperative for safeguarding sensitive data and preserving user privacy. 

For context, with their exponential computational power, quantum computers can potentially render existing encryption algorithms obsolete, posing significant risks to data security. Recognizing this, Apple has proactively invested in research and development to pioneer cryptographic solutions capable of withstanding quantum attacks.

That’s where the latest addition to Apple’s cryptographic arsenal, the PQ3 protocol, comes in. By introducing a new post-quantum encryption key within the iMessage registration process, Apple ensures that data exchanged through its platform remains protected against future quantum threats. PQ3 also incorporates advanced security features, such as a rekeying mechanism within iMessage conversations, designed to mitigate the impact of key compromises and bolster overall resilience.

“To our knowledge, PQ3 has the strongest security properties of any at-scale messaging protocol in the world,” Apple’s Security Engineering and Architecture (SEAR) team stated in a blog post a week ago.

PQ3 for iMessage integrates post-quantum key establishment and ongoing self-healing ratchets. Source: Apple

A quantum leap in messaging security

Traditionally, messaging platforms rely on classical public key cryptography like RSA, elliptic curve signatures, and Diffie-Hellman key exchange for secure end-to-end encryption. These algorithms are based on complex mathematical problems deemed computationally intensive for conventional computers, even with Moore’s law in play. But the advent of quantum computing poses a new challenge.

A powerful enough quantum computer could solve these mathematical problems in novel ways, potentially jeopardizing the security of end-to-end encrypted communications. While quantum computers capable of decryption aren’t yet available (as far as we know, supervillains notwithstanding), well-funded attackers can prepare by exploiting cheaper data storage. They accumulate encrypted data now, planning to decrypt it later with future quantum technology—a tactic called “harvest now, decrypt later.”

When iMessage launched in 2011, it became the first widely available messaging app with default end-to-end encryption. Over the years, Apple has continually enhanced its security features. In 2019, the iPhone maker bolstered the cryptographic protocol by transitioning from RSA to elliptic curve cryptography (ECC) and safeguarding encryption keys within the secure enclave, increasing protection against sophisticated attacks. 

“Additionally, we implemented a periodic rekey mechanism for cryptographic self-healing in case of key compromise. These advancements underwent rigorous formal verification, ensuring the robustness of our security measures,” the blog post reads. So, the cryptographic community has been developing post-quantum cryptography (PQC) to address the threat of future quantum computers. These new public key algorithms can run on today’s classical computers without requiring quantum technology. 

Designing PQ3

Designing PQ3 involved rebuilding the iMessage cryptographic protocol to enhance end-to-end encryption, meeting specific goals:

  1. Post-quantum cryptography: PQ3 protects all communication from current and future adversaries by introducing post-quantum cryptography from the start of a conversation.
  2. Mitigating key compromises: It limits the impact of key compromises by restricting how many past and future messages can be decrypted with a single compromised key.
  3. Hybrid design: PQ3 combines new post-quantum algorithms with current elliptic curve algorithms, ensuring increased security without compromising protocol safety.
  4. Amortized message size: To minimize additional overhead, PQ3 spreads message size evenly, avoiding excessive burdens from added security.
  5. Formal verification: PQ3 has undergone formal verification to ensure robust security assurances.

According to Apple, PQ3 introduces a new post-quantum encryption key during iMessage registration, using Kyber post-quantum public keys. These keys facilitate the initial key establishment, enabling sender devices to generate post-quantum encryption keys for the first message, even if the receiver is offline.

PQ3 also implements a periodic post-quantum rekeying mechanism within conversations to self-heal from key compromise and protect future messages. This mechanism creates fresh message encryption keys, preventing adversaries from computing them from past keys.
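To make the self-healing idea concrete, here’s a minimal sketch of a symmetric ratchet with periodic rekeying. It’s purely illustrative – not Apple’s PQ3 code – and real protocols use formally specified key derivation functions and negotiate the fresh secret with public-key cryptography:

# Minimal ratchet-with-rekey sketch - illustrative only, not Apple's PQ3 code.
# Advancing the chain per message gives forward secrecy; mixing in a freshly
# negotiated secret at each rekey locks out an attacker holding an old key.
import hashlib
import os

def kdf(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

chain_key = os.urandom(32)  # agreed at conversation start

def next_message_key() -> bytes:
    """Derive a one-off message key and advance the chain."""
    global chain_key
    message_key = kdf(chain_key, b"msg")
    chain_key = kdf(chain_key, b"chain")  # old message keys can't be recomputed
    return message_key

def rekey(fresh_shared_secret: bytes) -> None:
    """Mix in new key material negotiated out-of-band (e.g. via a KEM)."""
    global chain_key
    chain_key = kdf(chain_key, fresh_shared_secret)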

The protocol utilizes a hybrid design, combining elliptic curve cryptography with post-quantum encryption during initial key establishment and rekeying. Rekeying involves transmitting fresh public key material in line with encrypted messages, with the frequency of rekeying balanced to preserve user experience and server infrastructure capacity.
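For a sense of what a hybrid design looks like in practice, the sketch below combines a classical X25519 exchange with a post-quantum KEM and feeds both secrets through a single KDF. Again, this is a conceptual illustration rather than Apple’s implementation: the kyber768 module is a hypothetical stand-in for any Kyber/ML-KEM binding, while X25519 and HKDF come from the widely used cryptography package:

# Conceptual hybrid key establishment - not Apple's PQ3 implementation.
# 'kyber768' is a hypothetical placeholder for a Kyber/ML-KEM library.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

import kyber768  # hypothetical post-quantum KEM binding

def hybrid_session_key(peer_x25519_public, peer_kyber_public):
    # Classical half: ephemeral X25519 Diffie-Hellman agreement.
    ephemeral = X25519PrivateKey.generate()
    ec_secret = ephemeral.exchange(peer_x25519_public)

    # Post-quantum half: encapsulate a secret against the peer's Kyber key.
    kem_ciphertext, kem_secret = kyber768.encapsulate(peer_kyber_public)

    # Combine both halves so that breaking either algorithm alone isn't enough.
    session_key = HKDF(
        algorithm=hashes.SHA256(), length=32, salt=None, info=b"hybrid-demo",
    ).derive(ec_secret + kem_secret)

    # The sender ships its ephemeral public key and the KEM ciphertext alongside
    # the first encrypted message so the receiver can derive the same key.
    return session_key, ephemeral.public_key(), kem_ciphertext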

PQ3 continues to rely on classical cryptographic algorithms for sender authentication and key verification, since attacks on these require contemporaneous access to a quantum computer and cannot be performed retroactively. However, Apple noted that future assessments will evaluate the need for post-quantum authentication as quantum computing threats evolve.

Apple iPhone 15 series devices are displayed for sale at The Grove Apple retail store on release day in Los Angeles, California, on September 22, 2023. (Photo by Patrick T. Fallon / AFP)

Why PQ3 on iMessage matters for iPhone users

Integrating PQ3 into iMessage signifies a monumental leap forward in privacy and security for iPhone users. With the exponential growth of data and the looming specter of quantum computing, traditional encryption methods face unprecedented challenges. PQ3 mitigates these risks by providing quantum-resistant protection, ensuring that your conversations remain shielded from future threats. 

In essence, PQ3’s implementation in iMessage demonstrates Apple’s interest in safeguarding user privacy and staying ahead of emerging security threats. Beyond its robust encryption capabilities, PQ3 introduces a host of additional security features designed to enhance the overall integrity of iMessage. These include secure key establishment mechanisms, cryptographic self-healing protocols, and real-time threat detection capabilities.

By incorporating these advanced security measures, Apple ensures that iMessage remains a bastion of privacy in an increasingly interconnected world.

When can iPhone users expect the update?

Support for PQ3 will begin with the public releases of iOS 17.4, iPadOS 17.4, macOS 14.4, and watchOS 10.4. Already available in developer previews and beta releases, PQ3 will automatically elevate the security of iMessage conversations between devices that support the protocol. As Apple gains operational experience with PQ3 globally, it will gradually replace the existing protocol within all sustained conversations throughout the year.

GoPro-equipped robot gloves teach robots new tricks
https://techhq.com/2024/02/gopro-equipped-robot-gloves-teach-robots-new-tricks/ | Thu, 22 Feb 2024

A future in which humans do less manual and repetitive work and robots do more depends on finding an efficient way of teaching machines to perform such tasks. Ideally, the skills transfer process would generate rich data and be fast and cheap to carry out, but coming up with a method that ticks all of those boxes has proven difficult – until now. Hitting that sweet spot appears to be a pair of GoPro-equipped robot gloves developed by researchers in the US, which – according to video footage – could provide an easy way of training robots to do all kinds of things.


What’s more, all of the universal manipulation interface know-how has been open-sourced, including the 3D printing instructions for making the handheld robot gloves. As photos reveal, the soft finger design is capable of gripping a raw egg securely without breaking the shell.

To begin the skills transfer process between human and machine, users put on a pair of robot gloves and carry out the target task multiple times to build a training dataset. Don’t be discouraged by the need for repetition, as the results can be generalized to similar scenarios – using a so-called diffusion policy that has been shown to outperform existing state-of-the-art robot learning methods – which saves time later on.

Adding to the appeal, those same results can be used by different models of robot – provided that the unit can be fitted with duplicates of the robot gloves. In the demonstrations given by the team, whose members are based at Stanford University, Columbia University, and Toyota Research Institute, robots are taught how to place an espresso cup on a saucer and even wash up dirty plates.

Key to the success of the approach is the use of GoPro cameras – one on each of the robot training gloves and one on each of the grippers in the robot-mounted setup. The cameras feature fisheye lenses to capture a wide field of view, gathering large amounts of detail from the scene, and include inertial measurement units (IMUs) to enable pose tracking.

The team makes sure that all of the data feeds are latency-matched, which means that robots can carry out two-handed tasks correctly and perform actions such as throwing objects with high precision. Also, there’s a one-off mapping step that uses a visual code to help with simultaneous localization and mapping (SLAM).
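As a rough illustration of what latency-matching involves – this is our own sketch, not the team’s published code – frames from two cameras can be paired by nearest timestamp, with anything outside a small skew budget discarded:

# Illustrative timestamp alignment for two camera streams - not the UMI code.
# Each stream is a list of (timestamp_seconds, frame) tuples sorted by time.
import bisect

def latency_match(left_stream, right_stream, max_skew_s=0.02):
    right_times = [t for t, _ in right_stream]
    pairs = []
    for t, left_frame in left_stream:
        i = bisect.bisect_left(right_times, t)
        # Consider the neighbours on either side of the insertion point.
        candidates = [j for j in (i - 1, i) if 0 <= j < len(right_stream)]
        if not candidates:
            continue
        j = min(candidates, key=lambda k: abs(right_times[k] - t))
        if abs(right_times[j] - t) <= max_skew_s:  # drop badly skewed frames
            pairs.append((t, left_frame, right_stream[j][1]))
    return pairs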

If sufficient numbers of people join in, robots could quickly be taught to do many common industrial tasks using the open-sourced robot gloves – and that knowledge shared. Currently, robots are often taught through teleoperation, which can be a slow process. The wearable teaching grippers, on the other hand, provide a much speedier option and are more instinctive to use.

“By recording all information in a single, standardized MP4 file, UMI’s data can be easily shared over the Internet, allowing geographically distributed data collection from a large pool of nonexpert demonstrators,” writes the group in its paper – ‘Universal Manipulation Interface: In-The-Wild Robot Teaching Without In-The-Wild Robots’ – which is free-to-read on arXiv.

Timing the robot training process, the researchers found that their universal manipulation interface was around three times faster to use than teleoperation. Also, the learning framework was shown to be tolerant of big changes in lighting conditions and other interference.

For example, robots trained using the gloves can continue performing their tasks even if their base is moved or humans perturb the scene in other ways – such as adding more sauce to the dirty plates.

The dishwashing task is noteworthy as it’s what’s termed an ultra-long horizon task from an automation perspective, with the success of each step dependent on the previous one. Here, the robot needs to perform seven actions in sequence – turn on the faucet, grasp the plate, pick up the sponge, wash and wipe the plate until the ketchup is removed, place the plate, place the sponge, and turn off the faucet.

Given the apparent success of the approach, regular dishwashing appliances may face some competition from two-armed robots in the future – and it gets you thinking about other jobs that robots could do around the home.

The existential crisis of the Lenovo X1 Fold
https://techhq.com/2024/02/what-are-the-advantages-of-a-folding-screen-laptop/ | Thu, 22 Feb 2024

  • The latest folding screen laptop from Lenovo.
  • $4000+ asking price. We ask: Why?
  • Performant, pretty, and pretty pointless.

It can’t have escaped anyone’s notice that folding screens are de rigueur right now. The Samsung Galaxy Z Fold 5 and Motorola Razr Plus mobile phones are on heavy advertising rotation, and there are a few laptops, tablets and games consoles that also sport folding screens.

Perhaps calling such hardware laptops and tablets in their purest definitions is something of a misnomer. Ask anyone to describe a laptop, and they’ll describe a flat keyboard hinged with a display. There are variations on the theme, of course, such as displays that detach and transform into tablets like the Lenovo Yoga 9i Gen 8 and Microsoft’s Surface Pro 9, or standalone presentation screens.

The addition of a folding screen to the laptop form factor is the latest iteration on the laptop-cum-tablet theme, and the Lenovo ThinkPad X1 Fold (2nd generation) comes equipped with 16GB of DDR5 RAM, a 512GB NVMe PCIe 4.0 internal drive, an optional stylus and a detachable keyboard.

The hardware runs on a 12th Generation Intel i7-1250U processor running at 3.5-4.7GHz, and the OLED display offers a maximum resolution of 2560×2024, rated at a bright 600 nits. Full specs of the model under review are here.

16" tablet with a kink of folding screen.

Giant tablet with folding screen.

2nd generation of this folding screen laptop

The first generation of the X1 Fold sported a 13” screen, slower processors, only 8GB RAM and weighed 0.99 kg. That latter stat is an important one: the second generation model reviewed here weighs in at 1.9kg including keyboard and stylus. This is not an ultrabook by any means, nor is it designed to be one. It’s worth noting that when folded, the measurements are 6.9 x 10.87” (176.4mm x 276.2mm) – so, a relatively small footprint, but one that’s offset by its thickness – 0.68” (17.4mm) plus a few millimeters for the keyboard, and the aforementioned 4lbs of heft.

Lenovo pictures the X1 Fold’s users working and playing on the device in its possible configurations: part-folded in landscape like a paper magazine, as a large, flat standalone tablet, or using the screen in full landscape or portrait as a display with keyboard. There’s also a clamshell mode: more on this later.

In clamshell mode, dividing the screen in half.

The machine comes with a kickstand that holds the screen at an angle suitable for tabletop use in either orientation, and the hardware keyboard latches on to the bottom of the stand just below the screen with a satisfying magnetic snap. The X1 Fold can also be used with a stylus, which can be attached, again via magnets, to the sides or top of the tablet display.

Note that the kickstand turns the machine into a tabletop, not a laptop: you can’t balance this beast on your lap, and trying to do so makes the keyboard detach with infallibly comic timing.

In laptop mode, landscape orientation.

The device’s motion sensors do a good job of detecting the user’s wishes, rotating and adapting according to configuration, snapping window tiles according to aspect. When no external keyboard is detected, the X1 defaults to tablet mode with an on-screen pop-up keyboard rolling in from the bottom of the display.

By default, Windows 11 is configured to run in dark mode – Lenovo states that this maximizes battery life, which is stated as being 4-6 hours of normal use, but naturally, your mileage may vary.

Lenovo, as is the case with many hardware manufacturers, ships the device with some bloatware from itself and Intel, which can be largely ignored or uninstalled if required, in addition to the bloatware that’s unavoidable with Windows 11: Xbox, the mixed reality portal and the Spotify client that embeds comfortably into its autostarting niche after first run.

Screen in portrait with attached keyboard.

As a piece of hardware, the X1 Fold Gen 2 is a quick performer. There has clearly been a good deal of optimization of the interface’s responsiveness to the various sensors and peripherals like the stylus and keyboard. Connections are made quickly and there’s little of the rotate-(re-)rotate dance to have the device sense its orientation. The camera in the bezel responds to orientation in apps like Teams or Zoom. Speaker quality is better than you might expect, and the microphones gave clear and responsive results.

The keyboard that our device shipped with is exactly as you might expect, or indeed, dread. Let’s just say it’s not designed for protracted typing. Serious users will want to use the X1 Fold with something that’s less plastic-y, and for desk use, most will also opt for a secondary mouse: reaching back and forth from keyboard to screen to move the cursor/mouse pointer quickly loses its charm. The optional keyboard has the red ‘nipple’ mouse pointer, but that has few fans for obvious and understandable reasons.

The folding screen laptop’s big question

But it’s the laptop/tablet dichotomy that’s simply not solved by a folding screen, which raises the question: why?

As a tablet, the X1 Fold is too big and certainly too heavy. The images on Lenovo’s website of models holding the device like a half-folded book are laughable. Sure, it’ll take the same shape as a large book or small-ish magazine, but has none of the advantages of either (lightness, portability, durability, disposability, lendability, longevity, finger-feel and so on). And good luck finding media that will render on the two-page layout without much touch/click-dragging around of windows.

As a laptop it’s not particularly portable, especially given that users will want a keyboard that doesn’t flex and feel like a $5 Walmart smart TV controller.

That’s a 4lb, one-arm curl. Source: Lenovo.

Some may find the screen’s aspect ratio useful when unfolded, especially those who like the idea of a portrait mode, page-friendly layout for working on documents. For work in cramped environments, like Economy Class on airplanes, for example, you can fold the screen across its width into clamshell mode and either use the onscreen keyboard or sit the hardware keyboard on the lower portion of the screen. You lose half the screen’s area in both cases, obviously.

But the price premium that buyers pay for the ‘feature’ of the ThinkPad X1 Fold’s folding screen is, in our opinion, not worth it. In the cellphone form factor, a folding screen may have arguments in its favor for reasons of portability plus decent screen real estate when unfurled. But in the tablet/laptop space, there are few advantages and plenty of downsides.

By trying to hit multiple targets, the Lenovo X1 Fold 16” feels like a fully-functioning concept prototype that should have been quietly ditched at the user-testing phase of production. Its price tag (more than $4000 as reviewed) ensures that any novelty will wear off long before the arrival of the first credit card bill after purchase.

In use: as effective as it looks.

For the kind of money that only committed early-adopters might spend, you could buy an ultrabook for travel, a powerful desktop PC and a tablet for media consumption, and have a better experience all round. While this folding screen laptop is a powerful piece of hardware, it’s the solution to a problem no-one has.

On a final note, it’s worth remembering that Microsoft stripped out of Windows 11 the touchscreen capabilities of Windows 10, themselves a hangover from the mis-step of Windows 8. By default, the impressive screen’s native resolution is halved (200% zoom), presumably to try and make the desktop environment viable in the tablet-esque use model that Lenovo imagines its users will love. The Windows desktop is a horrible enough environment at the best of times; adding a hardware gimmick on top made for a no less miserable experience.

And now a word from our very much non-sponsors…

Samsung seizes 2nm AI chip deal, challenging TSMC’s reign
https://techhq.com/2024/02/samsung-seizes-2nm-ai-chip-deal-challenging-tsmc/ | Tue, 20 Feb 2024

  • The inaugural deal for 2nm chips marks a significant milestone for Samsung, signaling a challenge to TSMC and its dominance.
  • The deal could significantly change the power balance in the industry.
  • Samsung has a strategy to offer lower prices for its 2nm process, reflecting its aggressive approach to attracting customers, particularly eyeing Qualcomm’s flagship chip orders.

In the race for technological supremacy and market dominance, Taiwan Semiconductor Manufacturing Company (TSMC) and Samsung Electronics lead the charge in semiconductor manufacturing. As demand for advanced chips surges in the 5G, AI, and IoT era, competition intensifies, driving innovation. Both companies vie to achieve smaller nanometer nodes, which are pivotal for technological advancement. 

When it comes to semiconductor innovation, TSMC spearheads the charge, with ambitious plans for 3nm and 2nm chips, promising a leap in performance and efficiency. Meanwhile, Samsung, renowned for its memory chip prowess, is mounting a determined challenge to TSMC’s supremacy. Recent reports suggest that Samsung is on the brink of unveiling its 2nm chip technology, marking a significant milestone in its bid to rival TSMC.

In a notable turn of events disclosed during Samsung’s Q4 2023 financial report, the tech world buzzed with news of Samsung’s foundry division securing a prized contract for 2nm AI chips. Amid speculation, Samsung maintained secrecy about the identity of this crucial partner.

But earlier this week, a revelation from Business Korea unveiled that the patron happens to be Japanese AI startup Preferred Networks Inc. (PFN). Since its launch in 2014, PFN has emerged as a powerhouse in AI deep learning, drawing substantial investments from industry giants like Toyota, NTT, and FANUC, a leading Japanese robotics firm.

Samsung vs TSMC

Samsung, headquartered in Suwon, South Korea, is set to unleash its cutting-edge 2nm chip processing technology to craft AI accelerators and other advanced AI chips for PFN, as confirmed by industry insiders on February 16, 2024. 

Should news of this landmark deal prove accurate, it would be mutually advantageous. It would empower PFN with access to state-of-the-art chip innovations for a competitive edge while propelling Samsung forward in its fierce foundry market rivalry with TSMC, according to insider reports.

Ironically, PFN has had a longstanding partnership with TSMC dating back to 2016, but is opting to shift gears from here on out, going with Samsung’s 2nm node for its upcoming AI chip lineup, according to a knowledgeable insider. PFN also chose Samsung over TSMC due to Samsung’s full-service chip manufacturing capabilities, covering everything from chip design to production and advanced packaging, sources revealed.

Experts also speculate that although TSMC boasts a more extensive clientele for 2nm chips, PFN’s strategic move to Samsung hints at a potential shift in the Korean giant’s favor. This pivotal decision may pave the way for other significant clients to align with Samsung, altering the competitive landscape in the chipmaking realm.

No doubt, in the cutthroat world of contract chipmaking, TSMC reigns supreme, clinching major deals with industry giants like Apple Inc. and Qualcomm Inc. But, as the demand for top-tier chips escalates, the race for technological superiority heats up, with TSMC and Samsung at the forefront of the battle. While TSMC currently leads the pack, boasting 2nm chips for clients like Apple and Nvidia, Samsung is hot on its heels. 

“Apple is set to become TSMC’s inaugural customer for the 2nm process, positioning TSMC at the forefront of competition in the advanced process technology,” TrendForce said in its report. Meanwhile, according to Samsung’s previous roadmap, its 2nm SF2 process is set to debut in 2025. 

Samsung’s Foundry Forum (SFF) plan.

“As stated in Samsung’s Foundry Forum (SFF) plan, Samsung will begin mass production of the 2nm process (SF2) in 2025 for mobile applications, expand to high-performance computing (HPC) applications in 2026, and further extend to the automotive sector and the expected 1.4nm process by 2027,” TrendForce noted.

Compared to Samsung’s second-generation 3nm process (3GAP), SF2 offers a 25% improvement in power efficiency at the same frequency and complexity, a 12% performance boost at the same power consumption and complexity, and a 5% reduction in chip area. In short, with TSMC eyeing mass production of 2nm chips by 2025, the competition between these tech titans is set to reach new heights.

Yet, in a strategic maneuver reported by the Financial Times, Samsung is gearing up to entice customers with discounted rates for its 2nm process, a move poised to shake up the semiconductor landscape. With its sights set on Qualcomm’s flagship chip production, Samsung aims to lure clients away from TSMC by offering competitive pricing. 

This bold initiative signals Samsung’s determination to carve out a larger market share and challenge TSMC’s dominance in the semiconductor industry.

VMware & Broadcom axe free tier https://techhq.com/2024/02/vmware-broadcom-axe-free-tier/ Mon, 19 Feb 2024 16:21:46 +0000 https://techhq.com/?p=232181

  • VMware for free heads into the sunset.
  • VMware vSphere free edition axed.
  • Alternatives to VMware threaten the company’s future.

Following Broadcom’s acquisition of VMware, the company has axed its free tier that offered the ESXi hypervisor to test labs, hobbyists, and home lab builders.

In a knowledge base article, the company stated that the “VMware vSphere Hypervisor (Free Edition) has been marked as EOGA (End of General Availability). At this time, there is not an equivalent replacement product available.” While VMware for free is no more, the claim that there is “not an equivalent replacement product” is, as we’ll see, highly debatable.

What is a hypervisor?

Hypervisors are pieces of software that control virtual machines – encapsulated instances of operating systems – allowing multiple virtual computers to run on a single physical computer. Virtualization is one of the ways that servers can deploy multiple applications and services using fewer pieces of hardware than dedicating a machine to each would require. It’s possible because a typical combination of computer and operating system rarely utilizes all its resources at any one time. Hypervisors share resources on a host machine across all of its virtual machines, allocating processor cycles, memory, and I/O across the fleet.
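To illustrate the resource-sharing idea – a toy model of our own, not how ESXi or any other hypervisor is actually implemented – a host can keep admitting virtual machines for as long as memory fits and a CPU oversubscription ratio isn’t exceeded:

# Toy admission-control model of hypervisor resource sharing - illustrative only.
# The 4:1 vCPU oversubscription ratio is an assumption, not a VMware figure.
from dataclasses import dataclass, field

@dataclass
class Host:
    cores: int
    ram_gb: int
    cpu_overcommit: float = 4.0
    vms: list = field(default_factory=list)

    def can_admit(self, vcpus: int, ram_gb: int) -> bool:
        used_vcpus = sum(v["vcpus"] for v in self.vms)
        used_ram = sum(v["ram_gb"] for v in self.vms)
        return (used_vcpus + vcpus <= self.cores * self.cpu_overcommit
                and used_ram + ram_gb <= self.ram_gb)

    def admit(self, name: str, vcpus: int, ram_gb: int) -> bool:
        if not self.can_admit(vcpus, ram_gb):
            return False
        self.vms.append({"name": name, "vcpus": vcpus, "ram_gb": ram_gb})
        return True

host = Host(cores=16, ram_gb=128)
print(host.admit("web", vcpus=4, ram_gb=8))  # True
print(host.admit("db", vcpus=8, ram_gb=32))  # True - cores are shared, not dedicated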

VMware for free giving cause for concern. Source: Twitter.

VMware established itself as one of the main suppliers of virtualization technology in the early 2000s, just as commonly-available hardware capabilities became such that virtualization was viable. Relatively low-cost hardware was able to run multiple instances of servers, meaning organizations and data centers could offer server facilities without necessarily needing to supply dedicated hardware for each server instance. Today, it’s possible to rent a VPS (virtual private server) for as little as the cost of a round of take-out coffees, one that’s capable of running, for instance, a web server, database, and security stack.

Broadcom’s decision to limit its offerings is driven in part by a desire to recoup some of the $61bn cost of its acquisition of VMware. Previously, VMware had been bought and sold by EMC and Dell. By ensuring that the majority of its users will pay for licenses from now on, the company guarantees itself a revenue stream. The company has also ended perpetual licenses in favor of subscriptions.

Five threats to VMware’s future

While Broadcom may be content with its market consolidation and ensuring all users pay their dues, there are five threats to VMware that will ensure its eventual long-term demise, assuming its trajectory with regard to licensing remains the same.

  1. Competing products such as KVM, Harvester, Proxmox, and Xen offer IT teams alternative hypervisors and virtualization technologies that, in monetary terms, are free at the point of use. Their widespread use means that a large body of users are able to refine, update, and extend their solutions without the restriction of either fees or the closed strictures of the proprietary VMware platform. A significant portion of those users will also upstream their improvements and bug fixes and release new capabilities of the software to all other users under less restrictive license terms.
  2. By removing the ‘free to play’ tier, VMware and Broadcom have effectively removed the up-ramp to the paid tiers. Students, hobbyists, and career-minded IT staff are less likely to consider the platform to learn virtualization technology on. Large institutions offering CS courses, for example, can choose between free-to-use or paid-for licenses for software that is at least equal in capability.
  3. Organizations operating test environments for their development and deployment activities no longer have the wherewithal to experiment to the degree they might require. VMware use in environments that can be created and torn down at will (test labs) will be subject to the same fees as in production environments. That places a financial barrier on how organizations’ solutions can be tested.
  4. Advances in container-based workload deployment and management now make microservice-based applications and services viable in many contexts. Containerization is one area in which cloud computing has significant advantages for the creator. New projects are more likely to opt for containers over full virtualization if the latter seems to come with a price tag.
  5. The specter of a large company owning the rights to technology on which its customers depend brings with it several undesirable elements. Broadcom can, and might, either further raise prices, limit hypervisors’ capabilities, or ditch the project altogether. Or not. It’s the unpredictability of decisions made well away from the customer base that represents significant danger to organizations looking to bed in for the long term. Vendor lock-in may have been effective in previous decades when technological advancements were made by a single party in combination with a crushing marketing budget. Windows NT and Oracle, to take two such examples, were able to near-monopolize their chosen sectors (desktop and database, respectively). But such practices were very much of their time; in 2024, end-users have to deliberately choose their degree of lock-in and, of course, can opt for next-to-none. Existing license fee-paying customers find themselves offered two expensive alternatives: the devil is to be milked for license revenue until the end of time, and the deep blue sea is to make deep investments in moving to a different platform.

“The devil and the deep blue sea” by WarmSleepy is licensed under CC BY 2.0.

The end of VMware for free: consequences

It’s a common misconception that companies are bound by law to deliver increasing value to their shareholders – an idea, albeit an incorrect one, that might explain why Broadcom and VMware have chosen the path of license-fee revenue generation over ensuring a healthy intake of new users. What both companies’ decision-makers are bound by, however, is a need to stay in a position from which a board of directors and/or a shareholder vote can remove them.

Making shareholders happy at least ensures a short-term future that promises high salaries, stock options, and performance-related bonuses. If the desire for a third Ferrari or a modest mega-yacht outweighs the desire for the long-term viability of a software offering, then it’s easy to predict which way the wind will blow.

The post VMware & Broadcom axe free tier appeared first on TechHQ.

]]>
Big bucks for… the big guys?
https://techhq.com/2024/02/arm-stock-price-goes-up-thanks-to-ai-demand/ | Thu, 15 Feb 2024

  • ARM stock prices get a huge hike thanks to AI technology demand.
  • The chip designer’s earnings announcement last week caused the stock price to soar.

After returning to the stock market in September last year, UK chip designer ARM Holdings has seen its value almost double in less than a week. The company, based in Cambridge, reported financial results last Wednesday, showing that AI-technology demand is boosting its sales.

This isn’t exactly a tale of the little guy making it, given that chips designed by ARM already power almost every smartphone in the world. Since the earnings announcement last week, shares have soared and are now up by more than 98%.

Nvidia, another big name in the chip sector, has actually seen its shares more than triple in value over the last year. Demand for AI chips is responsible, the boom having helped Nvidia become one of the most valuable publicly-traded companies in the world.

Its market value is a jaw-dropping $1.8 trillion, making it the fifth US company to join the “trillion-dollar club” alongside fellow technology giants Apple, Microsoft, Alphabet, and Amazon.

What’s slightly different for ARM is that its technology isn’t used directly in AI work. Instead, other chip makers including Nvidia are choosing to use it for central processing units (CPUs) that work well with AI-specific chips.

Taiwan Semiconductor Manufacturing Company (TSMC) also works with ARM’s designs. Combine these two major customers with the rest of the consumer-focused companies that buy from ARM, and you’ve got huge revenue potential.

What’s more, self-driving technology means demand for ARM-designed chips is also growing in the automotive industry.

All this is a bit of a redemption arc for the company. ARM was founded in Cambridge in 1990 and bought by SoftBank 26 years later, in 2016, for $32bn. Four years after that, plans were announced to sell ARM to Nvidia.

Then, in early 2022, the deal was shelved by SoftBank after regulators around the world objected. Instead, it said it would float ARM shares on the Nasdaq stock exchange in New York.

The rise in share value, then, is good news for SoftBank, proving the wisdom of its decisions – particularly since it’s been hit by losses due to the dropping value of other investments like WeWork, the office space firm.

SoftBank holds a roughly 90% stake in ARM and has seen its own shares grow almost 30% in the last week.

That failed plan to sell to Nvidia has also seen some recompense: Nvidia has since disclosed an investment in ARM, with a stake in the company now worth $147.3m.

So, listen up all you AI naysayers. If there’s one thing that the technology is doing for humanity, it’s making huge sums of money for the corporations that experienced some minor losses a few years ago. And that’s worth something.

Interconnected supply chains, interconnected technology solutions https://techhq.com/2024/02/interconnected-supply-chains-interconnected-technology-solutions/ Thu, 15 Feb 2024 15:17:51 +0000 https://techhq.com/?p=232149


Multiple outside factors affect any company with significant investment in its supply chain and logistics operations, even before internal practices are considered. Extra red tape at Britain’s borders, continuing US port congestion, and trade disruptions in the Red Sea are testament to that. A supply chain comprises many actors, so it’s almost impossible to shield an individual business from the effects of even a small disruption.

Supply chain illustration. Source: Zebra Technologies

Supply chain, warehousing, distribution, and logistics functions, therefore, tend to be run by long-term planners. By proactively monitoring and improving the aspects of operations within its control, a company buys itself as much protection as it can against local or global events, whether they happen tomorrow or many years from now.

There are multiple ways in which problems closer to home can be addressed and their effects attenuated. Staff turnover in warehouses, distribution depots, and logistics, for example, brings its own considerable costs, exacerbated by chronic labor shortages in the sector. Employee well-being and job satisfaction – the employee experience (EX) – are high on the list of desired outcomes among decision-makers, as are worker productivity and automation. In some cases, EX, productivity, and robotized processes are interconnected: staff with dull, repetitive jobs can be re-tasked to more interesting and fulfilling roles once even basic technology and automated systems are in place.

Better operational practices not only reduce costs in the mid to long term but also help ensure that the company itself causes a minimum of supply chain disruption, both upstream and downstream. This, in turn, lowers the prices on the table from supply chain partners and builds greater trust among the third parties with whom the company works.

Supply chain illustration. Source: Zebra Technologies

Those with significant resources set the bar high for others to follow. Amazon, Temu, and state-owned GEODIS, for example, have been able to instigate top-down operational overhauls, with some becoming household names by dint of their efforts. Yet there is no direct correlation between the resources plowed into restructuring operations and guaranteed positive outcomes. Discrete projects can bring more modest transport, logistics, and distribution companies the kind of resilience and efficiency that creates significant cost reductions. That’s partly down to the knock-on positive effects that flow from relatively minor changes.

Light at the tunnel’s end

Founded in 1981, UK retailer The Works decided to give autonomy to its individual shops and online store, supplying employees with the technology to monitor and run their own stock control processes. The data gathered was fed back to automated systems in the company’s supply chain, slashing surpluses and ensuring stock was available where and when it was needed.

Simple-to-use devices with familiar user interfaces ensured employee buy-in, and the complexity of the data collated country-wide was handled by automated software. This drove more efficient logistics and distribution, informed warehouse operations, and created cost savings and greater customer satisfaction at the point of service.
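The Works’ internal systems aren’t public, so the snippet below is only a generic sketch of the kind of rule such a feedback loop automates: store-level stock counts flow in, and a simple reorder-point calculation flags what each shop needs replenished. The field names, figures, and thresholds are invented for illustration.

    # Hypothetical sketch of a store-level replenishment check.
    # Data shapes, figures, and thresholds are invented for illustration;
    # they are not taken from The Works' or Zebra's actual systems.
    from dataclasses import dataclass

    @dataclass
    class StockRecord:
        store_id: str
        sku: str
        on_hand: int            # units currently on the shelf
        avg_daily_sales: float  # recent average sales rate
        lead_time_days: int     # supplier lead time

    def reorder_quantity(rec: StockRecord, safety_days: int = 3) -> int:
        """Units to order now, or 0 if current stock covers expected demand."""
        # Classic reorder-point rule: cover expected sales over the supplier
        # lead time plus a small safety buffer.
        reorder_point = rec.avg_daily_sales * (rec.lead_time_days + safety_days)
        shortfall = reorder_point - rec.on_hand
        return max(0, round(shortfall))

    # Example: stock counts scanned in-store, decisions made centrally.
    records = [
        StockRecord("shop-042", "PUZZLE-1000", on_hand=4, avg_daily_sales=1.5, lead_time_days=5),
        StockRecord("shop-042", "PAINT-SET", on_hand=30, avg_daily_sales=0.8, lead_time_days=5),
    ]
    for rec in records:
        qty = reorder_quantity(rec)
        if qty:
            print(f"{rec.store_id}: reorder {qty} x {rec.sku}")

Applied across every shop and product line, even a rule this basic shows why surpluses shrink and availability improves once the data flows automatically.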

Data ingestion and business-led optimization of operations are particularly effective in supply chains, where the consequences of even small changes tend to ripple out into other parts of the business. That’s led larger organizations to invest in expensive re-architecting of enterprise-level ERP systems.

Supply chain illustration. Source: Zebra Technologies

But specialist software designed and deployed solely for supply chain, transport and logistics (T&L), distribution, and warehousing businesses makes for a particularly attractive offering. From users equipped with environmentally-tuned devices* to controlling software and analytics, industry-specific vendors are in an excellent position to help customers refine their operations across the board.

* (Cold room-suitable tablets, forklift-mounted tech, long-distance barcode scanners, etc.)

While no single solution is a sure fit for every organization, a sector specialist will already have many of the answers to problems that are sadly not unique to any one business. With the addition of consultation, advice, partnership, and long-term device and software support, companies with significant supply chains have much to gain from sector-specific vendors.

Conclusions

Many of the issues this article touches on are further explored in the Zebra Warehousing Vision Study, which examines the pain points of operations in the sector and suggests solutions for automation and technology deployment that make business sense.

Any transformational journey takes time and study, and some of the systems involved are particular to this complex vertical, so getting guidance as part of a longer-term partnership is highly recommended. What might seem an intractable problem in your organization’s operations may well have, if not an immediate solution, then at least a range of options. Sometimes it takes a specialist to bring them to the surface.

To refine your operational strategies, build resilience against the effects of global supply chain issues, improve EX, and cut operating costs in some surprising areas, read the Vision Study via this link.

To talk to an expert in the industry about how technology, hardware, and automation can help your business scale, reach out to a representative.
