Insights - TechHQ: Technology and business

Tiny VLMs bring AI text plus image vision to the edge https://techhq.com/2024/03/tiny-vlms-bring-ai-text-plus-image-vision-to-the-edge/ Thu, 07 Mar 2024 12:02:04 +0000


Large language models capable of providing statistically likely answers to written text prompts are transforming knowledge work. AI algorithms enable tasks to be performed faster and reduce the need for specialist skills, which can be unnerving to witness for highly trained and experienced staff. But there’s magic to how neural networks can level the playing field and help users to understand their world. And a great example of this is the rise of vision language models (VLMs), which co-locate image features alongside multi-dimensional word embeddings.

So-called multimodal AI – neural networks capable of generating results based on multiple input types such as text and images – adds to the number of ways that deep learning can be used. OpenAI’s GPT-4 with Vision will answer questions about images – for example, if the user submits a photo or web link. Given a picture of the contents of a fridge, it can suggest ideas for dinner.

You can picture VLMs as image recognition on steroids. Multimodal AI models can do much more than just recognize features in a scene. Algorithms can gather insights from security footage, prompt maintenance crews to take action, analyse customer behaviour, and much more besides (as our test results show – scroll down to see the responses).

VLMs can provide a second pair of eyes that never get tired. However, these superpowers come at a cost and historically have required heavy lifting to happen behind the scenes in the cloud. Generative AI algorithms have gorged themselves on the contents of the internet in the name of fine-tuning their model weights.

But not all of those parameters are created equal, and developers are finding ways to trim AI models down to a more memory-efficient size for specific tasks. It means that algorithms can run locally at the edge and suit fully remote use cases such as drone inspections – for example, where connectivity cannot be guaranteed.

One of a new breed of tiny VLMs capable of running at the edge is Moondream, which is advertised as being able to run anywhere, even on mobile devices and Raspberry Pis. The code to run the tiny AI text and vision model is available on GitHub, and there’s even a live demo environment so that you can jump straight in and put the VLM to the test.
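For readers who want to go beyond the demo, a minimal local harness might look like the sketch below. It assumes the `transformers` usage shown on the Moondream2 model card (`vikhyatk/moondream2`, with its `encode_image` and `answer_question` helpers) – check the GitHub repo for the current API before relying on it – and `harbour.jpg` is a placeholder filename.

```python
# Minimal sketch of a local Moondream2 prompt harness. The model id and
# the encode_image / answer_question methods follow the published model
# card at the time of writing -- verify against the repo before use.
MODEL_ID = "vikhyatk/moondream2"  # Hugging Face model id (assumption)

def format_report(results):
    """Render (question, answer) pairs in the Input/Response style used below."""
    return "\n\n".join(f"Input: {q}\nResponse: {a}" for q, a in results)

def ask_about_image(image_path, questions):
    # Heavy dependencies are imported here so the pure helper above
    # works without torch/transformers installed.
    from PIL import Image
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, trust_remote_code=True)
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    encoded = model.encode_image(Image.open(image_path))
    return [(q, model.answer_question(encoded, q, tokenizer)) for q in questions]

# Example usage (downloads the model weights on first run):
#   print(format_report(ask_about_image("harbour.jpg",
#                       ["Is it safe to walk straight ahead?"])))
```

Because `trust_remote_code=True` executes code from the model repository, pinning a specific `revision` in both `from_pretrained` calls is a sensible precaution.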

Tiny VLM test results

To discover what VLMs can do and highlight potential use cases for local AI models capable of answering questions about images, TechHQ submitted a variety of photos to the Moondream2 live demo hosted on Hugging Face.

Image 1

Harbour scene for testing AI vision language model

Harbour scene. Photo: JT.

Input: Is it safe to walk straight ahead?

Response: No, it is not safe to walk straight ahead. There is a dock with a red life ring attached to it, and a few boats are visible in the water. You need to be cautious and look out for any obstacles or hazards such as the dock, boats, or other watercraft, before walking forward.

Image 2

Stairway for testing AI vision language model

Stairway. Photo: JT.

Input: Is there something for me to hold on to?

Response: Yes, there is a railing on the stairs, and you can hold on to it while climbing to the top. The railing is attached to the wall, providing support for those ascending or descending the steps.

Input: Can you see an open window?

Response: Yes, there is an open window on the building in the background, and the sun is shining brightly illuminating the scene.

Image 3

Pedestrian crossing for testing AI vision language model

Pedestrian crossing. Photo: JT.

Input: Is it safe for me to use the pedestrian crossing?

Response: No, it is not safe for you to use the pedestrian crossing, as the image shows a black car approaching the crossing. The car is likely not paying attention to the red light, which indicates that it’s not safe for pedestrians to cross at that moment.

Image 4

Road signs for testing AI vision language model

Road signs. Photo: JT.

Input: Should I go left or right to get to the airport?

Response: Right.

Input: Which way to the marina?

Response: Left.

Image 5

Bus stop by the side of the road for testing AI vision language model

Bus stop. Photo: JT.

Input: Is there anywhere to catch a bus?

Response: Yes, there is a bus stop in the image, with benches for passengers to sit on and a covered area. A traffic cone is also present, and a bridge can be seen in the background.

Input: Where can I shelter if it rains?

Response: You can shelter at the bus stop if it rains, as there is a bus shelter nearby with benches and a roof providing protection from the elements.

Image 6

Image of a luxury yacht for testing AI vision language model

Luxury yacht. Photo: JT.

Input: What can you see?

Response: A helicopter is positioned on a ship, with a metal fence in the foreground. The background features a variety of elements, including buildings, trees, a road, a water body, and the sky.

Image 7

Photo of New Cut Coffee shop in Bristol, UK, for testing AI vision language model.

Coffee shop. Photo: JT.

Input: Is there anywhere to get a warm drink around here?

Response: Yes, there is a coffee shop nearby. The image shows a red building with the words “New Cut Coffee” on it, and a black fence. A black car is parked in the lot, and a white building is visible in the background. The sky is blue overhead.

Input: Does the coffee shop sell sandwiches?

Response: Yes.

Input: What else is there to eat at the coffee shop?

Response: In addition to coffee, the New Cut Coffee shop offers sandwiches and pastries. The image also features a building and a vehicle in the background.

Image 8

Web cam image for testing AI vision language model.

Web cam mode: the tiny VLM live demo can accept images via a connected webcam if browser permissions are given. Screengrab: JT.

Input: Can you see any ethernet ports?

Response: Yes, there are two visible ports on the device that the man is holding.

Verdict on tiny VLM testing

It’s hard not to be impressed with the performance of Moondream2 – a VLM that’s initialized with weights from SigLIP and Phi 1.5, and then trained using synthetic data generated by Mixtral.

Evaluating the responses generated from TechHQ’s test sequence of images and text inputs, it’s clear that Moondream2 gets more right than it gets wrong, and is capable of noticing fine details that weary human eyes may miss. Also, as mentioned, the AI model’s small footprint is another winning feature – paving the way for inference to take place on edge devices that could include drones or robots.

VLMs could turn out to be incredibly useful in helping robots orientate themselves in the world, and being able to run algorithms locally eliminates the need for network access – or at least provides redundancy in the case of outages.

A smart idea for smart glasses

Alongside its launch of the Quest 3 VR headset, Meta also announced a collaboration with Ray-Ban to produce an updated line of smart glasses. The product, which features Qualcomm’s weight-distributed Snapdragon AR platform, has some local voice control capabilities – for example, to instruct the spectacles to take images using the built-in ultra-wide 12MP camera. Users simply have to say, “Hey Meta, take a picture.”

Being able to couple those photos with a VLM running locally would turn the Ray-Ban smart glasses into a much more compelling product, and could bring scenes to life for visually impaired wearers without the need for pairing with a smartphone.

Vision assistance powered by edge-compatible VLMs could dramatically enhance the capabilities of digital camera-equipped devices. As the webcam image in our test sequence highlights, there’s the potential for algorithms to help with maintenance and repair tasks, distributing knowledge and expertise across the globe.

AI is doing for knowledge work what robotics has done for manufacturing, and it’s just the beginning.

 

The post Tiny VLMs bring AI text plus image vision to the edge appeared first on TechHQ.

Corn cobs have night vision prospects https://techhq.com/2024/03/corn-cobs-have-night-vision-prospects/ Tue, 05 Mar 2024 16:50:39 +0000


It could be corn cobs rather than carrots that have the best prospects – thanks to the abundance of lignin and cellulose found in the crop – to improve night vision. Researchers in China have used the materials to produce near-infrared (NIR) transparent optical filters for advanced imaging applications that highlight nature’s growing appeal to device makers.

“Lignocellulose-based bio-sourced materials are abundant, renewable, nontoxic, and mechanically strong candidates for optical materials,” explains the team in a paper published recently in Research – a Science Partner Journal.

NIR filters are crucial to the success of night vision cameras, as we shall soon discuss, and designs are typically made using two approaches. One option requires expensive glass materials that – while effective – can involve complicated fabrication steps. Alternatively, developers can use ultraviolet-visible absorbing dyes, which are easier to work with.

However, neither option has the environmental or economic appeal of using waste corn cobs. Plus, the performance – based on prototypes tested by the group – could be superior to conventional night vision components.

Why night vision cameras use NIR filters

To be effective, night vision cameras need to maximize the available light, which – once the sun has gone down, or is blocked indoors – is to be found in the NIR portion of the electromagnetic spectrum. NIR filters isolate this usable illumination band, while preventing interference from other wavelengths.

Without an NIR filter, night vision cameras would be easy to blind using a torch or a smartphone flash. Imaging systems would be similarly affected by other artificial light sources such as vehicle headlamps.

Regular digital cameras have an NIR blocking filter that stops photos and video from appearing washed out and makes sure that images match the optical response of the human eye.

As an aside, removing the NIR blocking filter from a digital camera can allow users to peer inside electronic devices that otherwise appear to be opaque – for example, if the case material is transparent to NIR light. Vein viewer technology uses near-infrared light to visualize major blood vessels beneath the skin, which also exploits the sensitivity of commercial imaging sensors outside the visible spectrum.

Eye-tracking in the near-infrared

Night vision style cameras are particularly well-suited to eye-tracking, as the NIR images provide strong contrast for algorithms to respond to. And there are a number of applications that exploit this opportunity. For example, several chip designers such as Qualcomm and Analog Devices have offerings that focus on driver and occupant monitoring for automotive applications.

Eye-tracking solutions, which operate at NIR wavelengths, can determine where the driver is looking at any moment in time to ensure that attention is being paid to the road ahead. Systems can also spot if the driver appears sleepy or is using a cell phone while the vehicle is in motion, and issue a safety warning.

Also, eye-tracking persists even if occupants are wearing sunglasses, as regular lenses are designed to block harmful ultraviolet rays and are transparent to NIR light. However, it’s possible to purchase privacy-focused spectacles, such as products sold by Reflectacles, which are fitted with an IR blocker.

In this case, the wearer’s eyes will remain obscured to 3D infrared facial mapping software and 2D facial recognition systems that use infrared light as their illumination source.

Returning to the researchers’ corn cob-derived NIR filter, the combination of cellulose and lignin appears to produce a high-performance and practical film.

“The captured lignin was fused to fill the gaps in a cellulose network, which then held the fibers tightly and created a homogeneous dense structure,” comments the group. “Both the lignin and the dense structure provided the biofilter with unique optical properties, including strong UV-vis light blocking (~100% at 400 nm and 57.58% to 98.59% at 550 nm), low haze (close to 0%), and high NIR transmittance (~90%).”
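To see why those figures add up to a workable NIR filter, they can be dropped into a crude pass/block check of the sort a camera designer might run. Only the transmittance values below come from the paper (with 550 nm using the best-case 98.59% blocking); the wavelength points and thresholds are our own illustrative choices.

```python
# Reported biofilter performance as a transmittance table (fractions).
# Values taken from the quoted paper; band choices are illustrative only.
transmittance = {
    400: 0.000,  # ~100% blocking in the UV
    550: 0.014,  # 98.59% visible-light blocking (best case reported)
    850: 0.900,  # ~90% NIR transmittance
}

def acts_as_nir_filter(t, block_below=0.05, pass_above=0.80):
    """True if visible light is mostly blocked while NIR mostly passes."""
    return t[400] <= block_below and t[550] <= block_below and t[850] >= pass_above

print(acts_as_nir_filter(transmittance))  # True for the best-case figures
```

At the worst-case visible blocking of 57.58% the same check fails, a reminder that the quoted range matters as much as the headline numbers.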

The post Corn cobs have night vision prospects appeared first on TechHQ.

Electronics recycling – cheese waste has a taste for gold https://techhq.com/2024/03/electronics-recycling-cheese-waste-has-a-taste-for-gold/ Mon, 04 Mar 2024 16:34:09 +0000


“E-waste is going to be the richest ore of the future,” proclaims Jason Gaber, owner of Mount Baker Mining and Metals. Gaber has a YouTube channel where he shows viewers how hammer mills and shaker tables can be used to process component-laden circuit boards and separate plastics from a mix of metals, including gold.

The business of extracting gold and other valuable materials from electronic junk is growing and is even becoming a popular side hustle. One tonne of electronic circuit boards can yield in the region of 0.150 kg of gold, and over double that in silver. Plus, there’s likely to be anywhere from 250 to 300 kg of copper up for grabs per tonne of e-waste.
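Those yields put a rough floor under the value of a tonne of boards. In the back-of-envelope sketch below, the yields come from the figures above (275 kg as the copper midpoint, 300 g for silver); the per-gram prices are our own approximate early-2024 assumptions, not numbers from the article.

```python
# Back-of-envelope value of one tonne of scrap circuit boards.
# Yields per tonne in grams (from the article); prices are assumed.
yield_per_tonne_g = {"gold": 150, "silver": 300, "copper": 275_000}
price_per_g_usd = {"gold": 65.0, "silver": 0.75, "copper": 0.0085}

def tonne_value(yields, prices):
    """Total USD value of the recoverable metals."""
    return sum(grams * prices[metal] for metal, grams in yields.items())

print(f"~${tonne_value(yield_per_tonne_g, price_per_g_usd):,.0f} per tonne")
# -> roughly $12,300 per tonne at these assumed prices, mostly from the gold
```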

For years, device users have been throwing away – collectively – billions of dollars in precious metals as they dispose of unwanted electronics.

In the beginning, e-waste was sent overseas to become somebody else’s problem. But processing e-waste has the potential to be many times more lucrative (and much less polluting) than trying to extract gold and other precious metals from ore mined from the ground.

The ability of environmental clean-up operations to turn a profit is seeing a wave of new e-waste recycling solutions enter the market. And for those who can run their operations at scale, there’s money to be made in turning e-waste into gold.

One of the most ingenious approaches – which is still at an early stage, but generating commercially promising results – uses spongy nanofibrils created from a by-product of cheese-making to soak up gold ions in solution and turn them into flakes.

Demonstrating the potential of their approach, researchers at ETH Zurich in Switzerland used their cheese waste creation to obtain a 450 mg gold nugget from 20 junk motherboards. According to the team, the material was 90.8 percent pure (21-22 carats), which values the reclaimed gold at around USD 28 – based on today’s scrap gold price.

What’s more, the group claims that the cost of the source materials and energy costs for the process represents just 1/50th of the value of the gold extracted from the e-waste.
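Those figures are easy to sanity-check. In the sketch below, the nugget mass, purity, and dollar value are taken from the article; the implied scrap price per gram and the processing cost are our own arithmetic.

```python
# Cross-checking the quoted ETH Zurich numbers.
nugget_g = 0.450    # 450 mg nugget (from the article)
purity = 0.908      # 90.8 percent pure, i.e. 21-22 carats
value_usd = 28.0    # quoted value of the reclaimed gold

fine_gold_g = nugget_g * purity              # ~0.409 g of pure gold
implied_usd_per_g = value_usd / fine_gold_g  # ~68.5 USD per gram of scrap gold
process_cost_usd = value_usd / 50            # "1/50th of the value" -> ~0.56 USD

print(f"{fine_gold_g:.3f} g pure -> ~{implied_usd_per_g:.1f} USD/g; "
      f"cost ~{process_cost_usd:.2f} USD")
```

An implied price of roughly USD 68 per gram is consistent with early-2024 scrap gold prices, so the article’s numbers hang together.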

Googling ‘how to turn e-waste into gold’ produces plenty of search hits, but many of the recipes feature toxic chemicals. However, by employing a bio-derived ionic sponge, the ETH Zurich researchers believe that they’ve found a gentler path to converting unwanted electronics into valuable materials. And they are not the only ones pursuing more environmentally friendly e-waste processing.

Mint Innovation, whose vision is to become the world’s leading provider of circular green metals, opened a commercial-scale facility in Sydney, Australia, in 2022. According to reports, the operation can salvage USD 85,000 in gold per day from recycled electronics – as well as being able to recover copper and other valuable metals.

Cleverly, Mint’s process – which was developed in New Zealand – makes use of bacteria and fungi that have evolved in regions rich in mine works and abandoned machinery. The organic soup is capable of absorbing metals, and Mint exploits those properties to process e-waste in a more environmentally friendly way compared with conventional methods.

According to Mint, everything leaving its plant is benign, which means that there are no chemical waste streams to be dealt with. And there’s more good news, as the process is applicable to other waste streams such as used batteries and catalysts.

The post Electronics recycling – cheese waste has a taste for gold appeared first on TechHQ.

Inkitt: what happens when AI eats its own words? https://techhq.com/2024/03/ai-will-help-writers-create-literally-average-stories/ Mon, 04 Mar 2024 09:30:39 +0000

  • Inkitt AI help for writers shows successful patterns.
  • Success delivered by what are proven to be winning formulae.
  • We look forward to Fast & Furious 52‘s release in 2066.

The latest $37m funding round for the self-publishing platform Inkitt was awarded at least in part due to its intention to use large language models that work on behalf of its authors. The AI will guide submissions to the eponymous app in areas such as readability, plot, and characterization.

Self-publishing is hugely popular among authors. It circumvents the often-frustrating processes of finding an agent, receiving rejections from established publishing houses, and lessening any income from a work thanks to parties in the chain who each take a cut of revenues generated by sales. An AI-powered virtual assistant can help authors with advice and offer changes to a text that are drawn from previously successful stories.

Inkitt’s AI amalgamates the output from several large language models to find trends in the enormous body of previously published books, giving writers help to align their work with already successful and popular works. At first sight, its approach is clearly more appropriate than having ‘authors’ simply use an AI to create words for a book. It’s also a step above once-respected news outlets using AI to write stories. But a deeper understanding of how large language models work informs us that the boundaries of creativity possible with AI are claustrophobic.

AI help for writers illustration

“Cuba book” by @Doug88888 is licensed under CC BY-NC-SA 2.0.

Whether in video, visual art, game design, or text, machine learning algorithms are educated on extant publications. Over the period of the learning phase, they process large quantities of data, and learn patterns that can then be used to reproduce material similar to that in the body of learning data.

In the case of a novel or screenplay’s structure, then, what’s succeeded in the past (in terms of popularity and, often, revenue generated) can be teased out from the also-rans. It’s a process that is as old as creativity itself, albeit a habit that’s formed without digital algorithmic help. Hollywood industry experts can produce lists of formulae that work for the plot, the rhythm of narrative flow, characterization, and so on. Such lists, whether ephemeral or real, inform the commissioning and acceptance of new works that will have the best chance to succeed.

The threat to creativity from the models used in ways like that proposed by Inkitt is twofold. The most obvious is one of the repetition of successful formulae. This means, depending on your choice of language, works that are on-trend, derivative, zeitgeisty, or repetitious.

The second threat comes from the probability curves embedded into the AI code. The degree of exception from the average of any creative work chewed up by an algorithm will always be diminished. What can’t be judged particularly easily is what makes something an exception and whether it’s different from the average because it’s badly created or because it’s superbly created. Truly fantastic creative works may be given a lesser weight because they don’t conform to a number of other factors, like sentence length or a color palette that is (currently) familiar.

The effect is one of standardization and averaging across the gamut of creative output so that a product is successfully conformist to the mean. Standardization equals conforming, which equals success. But standardization leads inexorably to stagnation.
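The pull toward the mean is easy to demonstrate in miniature. The toy sketch below is not Inkitt’s system – the scores, corpus, and blending weight are all invented – but it shows how any conformity step shrinks the distance between an exceptional work and the average one.

```python
# Toy model of averaging a creative work toward a training corpus.
import random

random.seed(0)
corpus = [random.gauss(50, 15) for _ in range(1000)]  # prior "works", scored 0-100
corpus_mean = sum(corpus) / len(corpus)

def conform(score, model_weight=0.8):
    """Blend a work's score with the corpus mean; higher weight, safer output."""
    return model_weight * corpus_mean + (1 - model_weight) * score

exceptional = 95.0
print(round(conform(exceptional), 1))  # pulled most of the way back toward ~50
```

Whatever the blending weight, the output always sits closer to the corpus mean than the original did – which is the stagnation argument in one line of arithmetic.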

In practical uses of AI today, many of the traits and methods of models are perfect for their designed purpose. Data analytics of spending patterns informs vendors’ choices for new product development based on what sells well. Outliers and exceptions have little importance and are rightly ignored by the model’s probability curve.

But in areas of creating new art, product design, music composition, or text creation, the exceptions can have value, a value that is increased by not conforming to average patterns of success, readability, aesthetic attractiveness, characterization, or one of a thousand other variables at play. If conformity to guidelines means success, then how we define success is the interesting question. History is littered with composers, artists and writers who didn’t conform, and were successful during their lifetimes or posthumously. Plenty too who were successful conformists. And many who kicked against prevailing strictures and got nowhere, dying in poverty.

Will AI be able to give help to writers?

“book” by VV Nincic is licensed under CC BY 2.0.

So what help can AI actually deliver for writers? As in many areas of life and business, it can work well as a tool, but it cannot – or at least should not – be allowed to dictate the creative elements of art.

By reducing creativity to an algorithmically generated idea of “what works,” talent that’s non-conforming is immediately stymied. It depends, of course, on what the creator’s desired outcome is, or how they deem themselves to be successful. If they want a greater chance of achieving mainstream popularity, then the Inkitt AI will help guide them in what to change to better fit into the milieu. Many dream of being the scriptwriter or 3D visual designer for the next movie blockbuster, and there is value in that. Inkitt may make people better writers, but it’s the individual’s idea of what a ‘better’ writer is that will inform their decision whether or not to sign up.

Individual human voices can make great creative works. But by placing those works inside a mass of mediocrity (and worse) and teaching an algorithm to imitate the mean, what’s produced is only ever, at best, going to be slightly better than average. As more content is created by AI and it too becomes part of the learning corpora of machine learning algorithms, AIs will become self-replicating, but not in the manner of dystopian sci-fi. Much of the future’s published content will just be very, very dull.

Oatmeal for breakfast, lunch, and dinner.

Amalgamating for the sellable mean turns tears of human creativity into nothing more than raindrops in the flood.

The post Inkitt: what happens when AI eats its own words? appeared first on TechHQ.

Leave your X behind, head for Bluesky https://techhq.com/2024/03/bluesky-vs-mastodon-which-to-use-after-x/ Fri, 01 Mar 2024 09:30:23 +0000

  • The Bluesky vs. Mastodon question is on every ex-Xer’s mind.
  • Which social media alternative is best for you?
  • We weigh up Bluesky vs. Mastodon so you don’t have to.

The Twitter-becomes-X debacle has been discussed weekly since a certain someone bought the social media company and turned it into the handcart in which we’re all going to Hell. We’re tired of talking about it, frankly. Although the mass migration from the site that was threatened by many users never quite came to pass, a new dawn broke for alternative social media sites.

Meta-owned Instagram tried to go toe-to-toe with X by launching its own microblogging platform, Threads, but it was outliers Mastodon and Bluesky that took in those users who couldn’t bear to keep using what used to be Twitter.

With images from ITSFOSS‘s review, let’s take a look at the sites and see what users are saying.

Mastodon was launched in 2016 by Eugen Rochko as a decentralized microblogging platform that was really quite hit-and-miss. It spent a while trying to replicate Twitter too closely, eventually developing into a social medium operating on distributed servers, called instances, allowing for independent networks with different themes and topics.

Bluesky vs. Mastodon - the Mastodon challenger.

Bluesky, on the other hand, began in 2019 as a team funded by Twitter to build an open and decentralized standard for social media. Twitter co-founder Jack Dorsey was on the team, hoping the platform would be the first to utilize the protocol.

When you-know-who took over, the plan for an open standard Twitter (and the name itself) disappeared. Bluesky has been an independent company backed by Dorsey since 2021, and it open sourced the client code for the platform, initially with an invite-only system for the website. Now, it’s opened up to everyone.

Bluesky vs Mastodon - Jack Dorsey is #TeamBluesky

Jack Dorsey left Twitter and is now on Bluesky’s board.

In terms of user experience, Mastodon has the benefit of being around for longer. The site has had time to evolve with community requirements. The interface is relatively similar to Twitter, with all the right differences to make the move feel worth it.

There’s no algorithm dictating what you see in the public feed (which some of us might miss, despite its foibles) and posts with more organic engagement will likely show up first. Otherwise, the feed shows posts by users you follow chronologically.

Experience does vary somewhat based on the instance you signed up to, because that impacts what you see and which server rules apply. This is pretty adjustable to your taste, though.

The number of instances to choose from is often lauded as a big plus for Mastodon. In fact, many reviews and user guides focus on all the benefits of the system but don’t elaborate on how best to make use of it.

That means it’s difficult to actually find the community that’s supposedly on offer – not many people want to put that much effort into building their social media profile. We’ve all gotten used to a handy algorithm to do it for us.

Bluesky vs. Mastodon - Bluesky, like Classic Twitter?

Glance too quickly and you’d think you were on the Twitter of old.

Another double-edged sword is the human moderators. In almost every way, servers moderated by humans are better for users, particularly when you can essentially choose servers moderated by individuals whose values align with your own.

Some have critiqued the system because, like the forum days of old, the people in charge can tend towards being power-crazed leaders with too much time on their hands. Not that that’s a Mastodon problem: it’s just people.

Bluesky, on the other hand, uses automated moderation. In many ways, it closely mimics Twitter in its heyday (unsurprising, really, given the Dorsey involvement) and you’d be forgiven for forgetting you’d switched to a new platform.

You get the option to choose an algorithm to follow per your requirements; limit your feed to just people you follow, or a custom one that learns what you like (a ‘for you’ feed tailored to your needs, baby!).

Although it’s separated from Twitter, Bluesky is still a corporate-backed product. It’s open source, and may soon let you host your own instance, Mastodon-style, but for now it’s a board-controlled entity.

Even if it completes its federation network and lets you self-host instances, Bluesky as a primary platform will use advertising and other monetization strategies.

Another one-up for Mastodon, perhaps, is its non-profit status and crowdfunded development. It’s not reliant on investors (like Twitter was…) and companies and users donate to Mastodon, keeping it free from monetization.

When it comes to privacy, both Mastodon and Bluesky get points for not requiring a phone number for sign up. Beyond that, Mastodon seems to come out ahead.

It allows users to push public posts or post only for followers and interact with individual followers using private mentions (a bit like direct messaging, which isn’t technically an option). You can also request an archive of your data and export it in an ActivityPub-compatible format, and easily move your data to another instance or (within some limits) even move your account data to another handle.

Mastodon’s privacy policy is simple and easy to understand.

You can’t make your profile private in the same way you can on Instagram or X, but there is a setting that automatically deletes posts, and you can set up your account so you manually approve followers.

Bluesky does offer private accounts but doesn’t use two-factor authentication – which Mastodon does. Plus, if your Bluesky account is private, your posts are only hidden from view for users who haven’t logged in to the site. The data is public to any other server connected to the network.

Data export is in the beta phase, so getting it all downloaded is a mammoth task. Bluesky’s privacy policy does clarify that it might use your personal information for marketing and research, and share it with third-party services. ITSFOSS doesn’t think it’s a good privacy policy for a web service in 2024.

The thing is, both Bluesky and Mastodon come out remarkably well against the platform they might replace – and the major social media that’s owned and run by tech giants. Depending on what’s spurred you to seek an X alternative, the two platforms offer a well-rounded selection.

Mastodon, though perhaps on the moral high ground, definitely requires more effort to set up and establish a presence – or at least a timeline that keeps you scrolling in the same way algorithm-run feeds do. Bluesky offers pretty much what was lost in the switch to X, perhaps without the years of user-feedback-influenced improvements.

Early reviews of Bluesky, back in invite-only beta, all celebrated its comparative positivity, harking back to ‘simpler times’ online. Whether it’s remained quite so fluffy and nice is questionable – as is how strong a marketing factor that will be.

Increasingly, you’ll notice that if you scroll to the bottom of a company’s website there’ll be a link to a Bluesky or Mastodon account – maybe even both. While the more mainstream services are still going, there is a sense that they’ve passed their peak, and a new era of social media feels closer to realization than it has for years – maybe even since the beginning of Instagram.

So, give one of the alternatives a go, even if only to say you were there when it all started.

Tired of the Twitter swamp? Be more Groot…

The post Leave your X behind, head for Bluesky appeared first on TechHQ.

Rapid prototyping dream team – VR and 3D printing https://techhq.com/2024/02/rapid-prototyping-dream-team-vr-and-3d-printing/ Thu, 29 Feb 2024 16:10:50 +0000 https://techhq.com/?p=232422


One of the biggest misconceptions about 3D printing – the notion that it doesn’t suit the production of goods in high volume – has been put to bed by brands such as Adidas, which has turned heads in the footwear industry with its striking midsole designs. Virtual reality (VR) is similarly misunderstood and sometimes dismissed as a gimmick. But put VR and 3D printing together, and you have a rapid prototyping dream team that’ll be the envy of your competitors.

And don’t think that you need to spend north of US$3,000 on an Apple Vision Pro to reap the rewards. The benefits of combining VR and 3D printing can be realized with an affordable headset such as the Quest 3 – released in 2023 – or even a Quest 2, which is now available at a lower price point and with a software speed-up.

Iterate faster

A big fan of having designers use VR to iterate designs in 3D is Jared Goldman – VP of Design at New Balance (another footwear company at the bleeding edge of manufacturing). And there are some compelling reasons to back up his thinking.

By Goldman’s estimation, physical samples have a turnaround time of around 45 days – once you’ve added up sending the tech package, production of the prototypes in Asia, and shipping of the shoes back to the US. However, it’s possible to produce photoreal samples of the same designs in seven days, or maybe less, in a virtual environment.

Decision makers can view the shoes from all angles in a virtual environment and give feedback with a confidence that compares with handling physical samples. Plus, headset-wearing team members can collaborate easily and in real time, no matter where they are across the globe – provided that they have an internet connection.

“The better you can express your idea, so that somebody who is a non-designer can understand it, the more successful you’re going to be,” Goldman comments in a case study shared by Gravity Sketch – a developer of VR design software that is increasingly becoming part of 3D printing product workflows.

Example of a VR and 3D printing toolchain

  • Gravity Sketch – for virtual product creation
  • Blender – to add materials appearance for final design validation
  • Ultimaker Cura – for slicing and 3D printing production preview
  • 3D printers – to produce the finished goods

Today, there are numerous online tutorials showing how to combine VR and 3D printing to go from first idea to finished product. Beginning with a rough sketch created using handheld VR controllers, the next step is to add virtual form to the digital design, with lighting effects making the output appear all the more realistic.


Slicing software will prepare the model for 3D printing, where it’s possible to adjust support material and preview the build to double-check that all’s well before committing fabrication time and materials to the job.
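For the technically curious, the slicer’s input is typically an STL mesh, and a basic integrity check is straightforward: a binary STL file is an 80-byte header, a 32-bit little-endian triangle count, then 50 bytes per triangle (twelve floats plus a two-byte attribute field). The Python sketch below builds a tiny file in memory to exercise the check – a minimal sanity test, not a substitute for the slicer’s own validation.

```python
import struct

def stl_triangle_count(blob: bytes) -> int:
    """Return a binary STL's triangle count, verifying it matches the file size."""
    if len(blob) < 84:
        raise ValueError("too small to be a binary STL")
    # Bytes 80-83 hold the triangle count as a little-endian uint32.
    (count,) = struct.unpack_from("<I", blob, 80)
    expected = 84 + 50 * count
    if len(blob) != expected:
        raise ValueError(f"size mismatch: {len(blob)} bytes, expected {expected}")
    return count

# Build a tiny two-triangle STL in memory: normal + 3 vertices = 12 floats,
# followed by a 2-byte attribute word, per facet.
header = b"\0" * 80
facet = struct.pack("<12fH", *([0.0] * 12), 0)
blob = header + struct.pack("<I", 2) + facet * 2
print(stl_triangle_count(blob))  # 2
```

A mismatch between the declared count and the file size is a common symptom of a truncated export, and catching it before slicing saves a wasted print job.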

And it’s not just about keeping sneakerheads happy with a stream of new and exclusive trainers; VR and 3D printing can be combined to deliver progress in areas such as medical visualization too.

Materialise – a pioneer in the field of 3D printing – points out that the number of hospital-based 3D printing facilities has increased exponentially to help with tasks such as patient-specific surgical planning.

On TechHQ, we’ve written about how beneficial VR can be to medical training and upskilling hospital staff. And 3D-printed anatomical models take that tactile experience to the next level.

Construction is another area that’s taking a keen interest in additive manufacturing – this time using giant 3D printers to build homes layer by layer. Here, VR can be used by designers and home buyers to experience what it’s like to move around a digital render of the building before nozzles start squirting out cement.

“We see this tool as an exciting way to get clients, designers, and contractors up to speed on the inner workings of how a 3D construction printing project actually works,” commented Marchant Van Den Heever, CTO of HTL.tech – a distributor of 3D construction printing technology in the UK and Ireland.

The post Rapid prototyping dream team – VR and 3D printing appeared first on TechHQ.

3D printed shoes – footwear signals future manufacturing trend https://techhq.com/2024/02/3d-printed-shoes-footwear-signals-future-manufacturing-trend/ Wed, 28 Feb 2024 16:55:03 +0000 https://techhq.com/?p=232393


3D printed shoes are a great example of where additive manufacturing absolutely lives up to the hype. And following the progress made by sneaker designers such as Adidas, which has built up considerable experience in using 3D printers to make trainers, reveals much about how future products could be manufactured.

In the early days, 3D printing was badged as a rapid prototyping tool. Use cases were one-off parts or mock-ups to double-check that all was well before pulling the trigger on expensive injection molding tooling. But, as Adidas and other leading lights have discovered, the appeal of 3D printing for making shoes and other mass-produced goods goes much further.

One of the biggest misconceptions about 3D printing is that it cannot be used to make products at scale. However, the progress made by Adidas and its additive manufacturing partner Carbon in developing 3D printed shoes for the mass market challenges that.

The rise of printer farms – sometimes referred to by operators as ‘warehouses where the shelves make the product’ – makes it possible for 3D printing to compete with injection molding at high volume. And customers will be surprised by just how many parts that modern 3D printing facilities can deliver.

Naturally, the design should play to the strength of the additive manufacturing process and suit speed-ups – such as having the 3D printer automatically eject the part after completing each build – to hit big numbers.

Today, manufacturing experts will recommend 3D printing to customers wanting up to a million parts a year, and the reasons are more than just cost-parity with injection molding. And this is where it’s useful to pick up a pair of 3D printed Adidas 4DFWD shoes to see the manufacturing appeal up close.


Forward thinking: the 3D printed midsole used in the Adidas 4DFWD features a variable stiffness bow tie lattice that compresses forward to improve running efficiency. Image credit: Adidas.

Trainers often push the design envelope in terms of the combination of shapes and materials. Sneakerheads view trainers as a work of art, and the lattice-based midsole of the 4DFWD is a thing of beauty. It has attractive mechanical properties too.

Designers of high-performance trainers want to be able to vary the stiffness of the midsole along the length of the shoe – providing comfort upon first strike, but without slowing down the athlete. And having lattice geometries greatly widens the range of mechanical properties that are available from a given material.

What’s more, 3D printed shoes open the door to mass customization – for example, lattices could be tailored to individual runners, factoring in differences in weight and stride length – making products more attractive to customers.
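To see why lattice geometry widens the stiffness range of a single material so dramatically, consider the textbook Gibson–Ashby scaling law, under which the effective modulus of a bending-dominated lattice falls with roughly the square of its relative density. The material values in the sketch below are illustrative, not Adidas or Carbon data.

```python
# Back-of-envelope sketch of lattice stiffness tuning: the Gibson-Ashby
# scaling law for bending-dominated lattices puts the effective modulus
# at roughly E* = E_s * (relative_density)**2, so halving the density
# quarters the stiffness from the same printed material.

def lattice_modulus(solid_modulus_mpa: float, relative_density: float,
                    exponent: float = 2.0) -> float:
    """Effective modulus of a lattice via the Gibson-Ashby scaling law."""
    return solid_modulus_mpa * relative_density ** exponent

solid = 50.0  # MPa - a notional midsole elastomer, not a measured value
for density in (0.1, 0.2, 0.4):
    print(f"relative density {density:.1f} -> {lattice_modulus(solid, density):.2f} MPa")
```

Varying the lattice density along the length of a midsole is exactly how a designer gets a soft heel strike and a stiff toe-off out of one material in one print.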

Updatable, updatable, updatable

The customization that goes hand-in-hand with 3D printing helps brands too, as they can update their range whenever they choose. Having to make changes to injection molding tools limits the frequency of product updates to maybe just a couple of times a year. But with 3D printing, it’s done in software – once the new file is loaded, you’re good to go.

Manufacturing updates can be applied rapidly too. For example, as Carbon’s research and development team discovered ways to produce the 3D printed Adidas shoes more rapidly and efficiently, those process improvements could be sent seamlessly to the manufacturing facility without the need for any personnel to travel or change parts.

And it’s telling that other brands have joined Adidas in 3D printing shoes. Footwear designs have been walked down the catwalk, and some firms – such as Vivobarefoot – are exploring how the innovative shoe manufacturing technology can support a circular economy for trainers.

Being able to print locally and on-demand can shorten supply chains and reduce the amount of unsold product that goes to waste. If circular economy ambitions can be realized on top of that, the future of mass-produced footwear and other 3D printed products could become even more attractive.

The post 3D printed shoes – footwear signals future manufacturing trend appeared first on TechHQ.

Connectivity cuts profits for utilities corporations https://techhq.com/2024/02/transmission-lines-america-utilities-companies-lobbying-against/ Wed, 28 Feb 2024 09:30:11 +0000 https://techhq.com/?p=232356


• Technically, the US has a shortage of transmission lines.
• Interregional transmission lines would help provision – but potentially hit utility profits.
• Power vs. profit – the ultimate American standoff…

Without the power grid, there would be nothing. Or at least nothing for us to write about, and nothing to write it on; the human condition has become reliant on electricity for more or less everything. So the system that provides it, and ensures it gets across the country, must be well planned and beneficial to as many people as possible. Right?

That depends on where you are: there aren’t enough transmission lines in the US to connect regional power networks, driving up the cost of electricity, reducing grid reliability and hampering the deployment of renewable energy.

High voltage transmission lines are what move large amounts of energy across long distances, linking power generation to power consumption. If done right, the transmission network contains a web of connections that create a reliable, redundant power supply system of huge scale.

Electricity makes money for utility companies who, being good capitalists with shareholders to satisfy, want to keep hold of as much of it as possible. That means they refuse to pursue (potentially expensive) interregional transmission projects and go as far as actively impeding them, because new projects threaten their profits and disrupt industry alliances.

Utility companies are lobbying against reforms that would lose them money: addressing transmission shortages has long been on the agenda in Washington, but utility firm lobbying continues to ensure delays.

As things currently stand, around 40 corporations own the vast majority of transmission lines in America. Their hold on the backbone of US grids should be scrutinized.

With more transmission lines comes more capacity and connectivity, letting new power plants connect and more power to move between transmission networks. Utility companies don’t want that kind of competition, or for their allies to lose regional control – and so transmission expansion is something they oppose.

The existing transmission networks across America were built largely during the last century by for-profit companies. Nonprofit utility providers organized by governments and communities had some part in it too – but by comparison, a very minor one.

The geographical equations of transmission lines

It makes sense that transmission lines tend to be concentrated around fossil fuel reserves and population centers, but there’s another force at play, deciding where the lines are routed: historic utilities alliances.

Where agreements were made between companies to trade energy, sufficient transmission was built that would allow power to move between their local service territories. Over time, alliances have expanded but there are still non-allied utility companies with comparatively very weak connections.

Expansion opens opportunities for new power plant and transmission developers to undercut profits, taking control over the rules shaping the industry. The value of linking networks is widely accepted around the world – but it doesn’t make money for the American companies currently in control of the grid.

Connecting regional networks is critical to the incorporation of renewable energy. For example, four proposed high voltage lines totaling 600km along the seam of regional networks in the upper Midwest would connect at least 28 gigawatts of wind and solar energy. Although the plans have been around for years, utility companies in neighboring regions haven’t moved forward.

Proposed new transmission lines in the upper Midwest. Via Joint Targeted Interconnection Queue Study (JTIQ), MISO, SPP.

We might learn from the European Commission, which in 2018 set a target that each member country would transmit across its borders at least 15% of the electricity produced in its territory. By the end of 2022, 23 gigawatts of cross-border connections in Europe were under construction or in advanced stages of permitting; it’s unlikely those losing profit over the changes were totally on board, but the change has gone ahead all the same.

In the US, building the line across the Midwest would cost $1.9bn, which is a staggering number – until you compare it with the cost of rebuilding aged transmission infrastructure every year.

Not only that, but interregional transmission for renewable energies also significantly reduces the cost of use for consumers. Even if renewables aren’t considered, costs would be massively reduced given that better integrated networks reduce the amount of generation capacity needed and decrease energy market cost. Reliability goes up, too.
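The capacity saving is easy to show with back-of-envelope arithmetic (all figures below are invented for the example): two isolated regions must each build to their own peak demand plus a reserve margin, while an interconnected system builds to the coincident peak, which is lower because regional peaks rarely land in the same hour.

```python
# Illustrative arithmetic (numbers invented) for why interconnection trims
# the generation capacity a grid must build. Isolated regions each size
# capacity to their own peak plus a reserve margin; a connected system
# sizes to the coincident peak, which is lower because peaks don't align.

margin = 0.15                 # 15% planning reserve margin
peak_a, peak_b = 40.0, 35.0   # GW - individual regional peaks
coincident_peak = 68.0        # GW - the two peaks occur at different hours

isolated = (peak_a + peak_b) * (1 + margin)
connected = coincident_peak * (1 + margin)

print(f"isolated build: {isolated:.1f} GW")
print(f"interconnected build: {connected:.1f} GW")
print(f"capacity avoided: {isolated - connected:.1f} GW")
```

Every avoided gigawatt of capacity is a power plant that never needs financing, which is precisely the revenue the incumbents would rather keep.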

What limited interregional connection there was proved paramount in preventing total disaster when Storm Elliott disabled power plants and pipelines from Dakota to Georgia in 2022. Imagine a reality in which localized disruption didn’t mean blackouts for entire states.

Won’t someone think of the profits?!

That isn’t how utilities companies see it, of course. For them, it means a whole bunch of drawbacks. More connections open the door for competitors who might undercut them on price; with profits in mind, having a monopoly is the more efficient choice, but interregional lines threaten utilities’ dominance over the nation’s power supply.

Also, building a whole new power plant in one area generates more money than just building transmission lines from an existing one. Transmission projects also mean competing against other developers for profit from that construction.

There’s some hope in the BIG WIRES Act, introduced in September by Senator John Hickenlooper and Representative Scott Peters. The acronym, so handily pertinent to the cause, stands for Building Integrated Grids With Inter-Regional Energy Supply. [Do you ever get the feeling politicians sometimes find the acronym first and work backwards? – Ed]

Hard not to see a case for nationalizing the power grid, but we won’t spell it out. Climate emergency and all, best to keep an eye on the electricity companies though, eh?

Unless you happen to know a friendly neighborhood god of thunder, you’d probably better look to your transmission lines.

The post Connectivity cuts profits for utilities corporations appeared first on TechHQ.

Affective computing – how to build better digital products https://techhq.com/2024/02/affective-computing-how-to-build-better-digital-products/ Tue, 27 Feb 2024 15:34:17 +0000 https://techhq.com/?p=232372


“The goal of affective computing is to create a computing system capable of perceiving, recognizing, and understanding human emotions and responding intelligently, sensitively, and naturally,” write researchers in the journal Intelligent Computing.

Hitting such a milestone would signal a UX breakthrough with digital applications that were much more instinctive to use, as programs would be able to determine when users are struggling – through emotional insights – and make adaptations.

A popular approach has been to use wearables fitted with sensors, but making sense of the data is easier said than done. Developers are aware that the physiological response accompanying the emotional response to an event can be the same for different emotions. For example, fear, anger, and excitement can all send heart rates racing. To be effective, affective computing algorithms need to differentiate between such signals.

The good news is that machines have many inputs to draw upon when trying to determine the emotional state of users. Many of the devices we use daily have cameras and microphones, which pave the way for image, gesture, and speech recognition. So, the idea of applying affective computing principles to our devices is by no means far-fetched.
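As a toy illustration of why fusing signals helps – fear and excitement both send heart rates racing, but facial valence separates them – consider a nearest-centroid classifier over a handful of sensor features. The centroids and readings below are invented for the example, not drawn from any real affect dataset, and a production system would normalize features rather than compare raw units.

```python
import math

# Toy sketch of multi-signal fusion for affect recognition. A heart-rate
# sensor alone cannot tell fear from excitement (both raise arousal), but
# adding facial valence (-1 negative .. +1 positive) separates them.
# Feature order: (heart rate bpm, skin conductance uS, facial valence).
CENTROIDS = {
    "calm":       (65.0, 2.0, 0.2),
    "fear":       (110.0, 9.0, -0.8),
    "excitement": (110.0, 8.0, 0.8),
}

def classify(sample: tuple) -> str:
    """Label a fused sensor reading by its nearest emotion centroid."""
    return min(CENTROIDS, key=lambda label: math.dist(sample, CENTROIDS[label]))

print(classify((108.0, 8.5, 0.7)))   # high arousal + positive valence
print(classify((108.0, 8.5, -0.7)))  # same arousal, negative valence
```

The two readings are identical on the physiological channels; only the valence feature flips the answer, which is the whole argument for multimodal input.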

Emotion as a service

Speaking on the topic of affective computing (what is it and why should I care?), Håkan Silfvernagel – a consultant with expertise in human-machine interfaces and robotics – mentions Microsoft’s Emotion API, which is capable of receiving an image of a human face and returning one of eight emotions, including happiness, sadness, fear, and disgust.

Other tech firms such as Amazon and IBM have developed emotion recognition capabilities as well. And engagement analytics provided by Affectiva – whose co-founder, Rana el Kaliouby, wrote ‘Girl Decoded’ to share her quest to reclaim our humanity by bringing emotional intelligence to technology – are popular with many of the world’s largest advertisers.

Considering device applications, Affectiva has applied its so-called Emotion AI to automotive settings using algorithms fed with in-cabin camera footage to detect the state, emotions, and reactions of drivers and other occupants in the context of vehicle environments.

It’s not a stretch to imagine soothing music being piped out of a car’s speakers to calm the driver during rush hour traffic – a use case that’s been given to highlight the adaptive properties of affective computing systems.

Lately, advances in AI have provided a major boost to the field – a point made by Rosalind Picard, who wrote the book on Affective Computing and has been working in the area for decades. Sentiment analysis of spoken words and text has never been as straightforward to carry out as it is today, and that information can be fed back into education and training systems to gauge the performance of learners.

Likewise, image recognition and object tracking have come on leaps and bounds thanks to AI advances, and those breakthroughs positively impact affective computing systems.

Apple has found itself having to interpret the expressions of wearers of the Vision Pro to generate what the tech giant dubs personas, which are used as avatars that participate in FaceTime calls.

The more emotional signals that VR systems can provide, the richer the experience for headset wearers. And affective computing will be key for tempting more business users to participate in the metaverse.

Advances are being made in brain-computer interfaces, which could give developers another perspective on users’ emotional states. As the cost of wearables incorporating EEG measurements comes down, this information will add to the data fusion used by affective computing devices.

Signals gathered from the brain could spotlight signs of fatigue, joy, or frustration – to give just a few examples. Also, beyond making software more user-friendly, virtual assistants programmed using affective computing techniques have been shown to encourage creative problem-solving, which hints at further wins for business users.

The post Affective computing – how to build better digital products appeared first on TechHQ.

Bitcoin inventor was aware of currency’s power demands https://techhq.com/2024/02/what-is-the-cost-of-bitcoin-in-environmental-terms/ Tue, 27 Feb 2024 15:30:48 +0000 https://techhq.com/?p=232330

  • Bitcoin designed to replace traditional finance and gold mining.
  • Court documents say Nakamoto aware of energy consumption issue.
  • Time to divest for the benefit of future generations.

A court case currently underway in London, UK, has brought wider attention to several emails purportedly written by the inventor of Bitcoin, Satoshi Nakamoto. In them, they considered the energy use of the Bitcoin network.

The legal case centers on Craig Wright’s claims that he is Nakamoto. The real identity of the cryptocurrency’s inventor is not known for certain, and Wright’s claims, if validated, would mean that he has a significant say in the future development of Bitcoin projects.

Published in Wired last week, Satoshi’s emails contained several comments about the network’s energy consumption.

“If [Bitcoin] did grow to consume significant energy, I think it would still be less wasteful than the labor and resources of intensive conventional banking activity it would replace,” Satoshi said in a message to Martii Malmi, one of the early developers of the technology.

Bitcoin’s energy use

The actual levels of power consumption by Bitcoin are uncertain: miners operate in a highly competitive market and so are not inclined to be particularly transparent as to the details of their operations.

Energy consumption comes largely from two activities in the Bitcoin network: throwing massive computational power at the process to ‘solve for coins,’ and the processing required to handle individual transactions when cryptocurrency changes hands.

A well-accepted metric for the Bitcoin network’s energy consumption is the Cambridge Bitcoin Electricity Consumption Index (CBECI), published by the University of Cambridge’s Judge Business School. The School revised its model in August of last year to account for changes in the underpinning technologies and hardware at the heart of the Bitcoin network since 2019. The update is, in part, a “response to evidence indicating a periodic overestimation of electricity consumption.”


“Pollution” by sheilaz413 is licensed under CC BY-NC-ND 2.0.

The figure representing the total energy consumption of the Bitcoin network was revised down by 9.8TWh (terawatt hours) for 2022, to 95.5TWh. That places the global system’s consumption alongside nation-states like Belgium and the Netherlands. The paper publishing the Index’s revision details also notes that, overall, the efficiency of Bitcoin mining has increased as hardware advances and refines (albeit now at a slower rate than in the currency’s heyday).
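The basic arithmetic behind indices like the CBECI is simple enough to sketch: network power draw is roughly the observed hashrate multiplied by the fleet’s average energy cost per hash. The hashrate and efficiency figures below are illustrative assumptions for the sketch, not CBECI’s published model inputs.

```python
# Back-of-envelope version of turning hashrate into power draw:
# network power ~ hashrate x average fleet efficiency (joules per hash).
# Both input figures below are assumptions chosen for illustration.

hashrate_eh_per_s = 400.0    # EH/s - assumed network hashrate
efficiency_j_per_th = 30.0   # J/TH - assumed average miner efficiency

hashrate_th_per_s = hashrate_eh_per_s * 1e6        # 1 EH = 1e6 TH
power_w = hashrate_th_per_s * efficiency_j_per_th  # J/s = watts
annual_twh = power_w * 8760 * 3600 / 3.6e15        # watt-seconds -> TWh/year

print(f"power draw: {power_w / 1e9:.1f} GW")          # ~12 GW
print(f"annual consumption: {annual_twh:.1f} TWh")    # ~105 TWh
```

With these assumed inputs the sketch lands in the same ballpark as the revised CBECI figure, which is what makes hashrate-times-efficiency a credible first-order model.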

Bitcoin’s environmental impact

The environmental impact of Bitcoin operations is even more complex to estimate than their total energy consumption. Renewable energy is said to power a sizeable proportion of mining operations, with estimates varying [paywall] from around 40% to 75% of the total power consumption. Bitcoin mining operations tend to congregate where energy is plentiful from renewable sources, such as hydroelectric power. In these locations, like certain areas of the US, China, and Scandinavia, such hydroelectric power tends to be cheaper than fossil-derived alternatives.

But environmental damage is said also to come from e-waste comprising discarded mining rigs, which are superseded by faster, more efficient hardware in generational upgrades. Processing a single Bitcoin transaction is said to produce over 700 pounds of carbon, plus there are additional emissions from data center cooling systems and water usage, to name just a couple of other factors.

The two human activities that Bitcoin’s creator thought might be replaced by Bitcoin, conventional banking and gold mining, still create significantly more negative environmental effects than the entire Bitcoin apparatus, with conventional finance systems alone estimated to produce double the carbon emissions of Bitcoin.

But the slow rate at which Bitcoin transactions can be achieved effectively makes the currency unviable as an everyday means of exchange (there are other networks, such as Ethereum, which are capable of the type of scale required, and which do not use the power-intensive proof-of-work model to mine new coins).

The fact that Bitcoin exists in addition to the activities it was supposed to replace raises the question of its viability. Clearly, the technology cannot be uninvented, and attempts by governments to limit its use have been mostly unsuccessful, with most adopting an accept-and-tax approach to cryptocurrencies. Freedom from governmental control was, it has to be said, part of Bitcoin’s design remit.

But like renewable energy, which exists as a supplement to fossil-derived power, not as a replacement, Bitcoin and its ecological effects exist in addition to all the consequences of fiat finance.

Grist to the extinction mill

The Bitcoin network’s activities are said to consume the equivalent of around 2%-3% of the US’s annual power usage. Lowering power consumption worldwide, year-on-year, is the most important way to downgrade the status of environmental deterioration from an extinction event to merely a chance of survival for the generation that will live at the end of this century. (NB experiencing survival will still be deeply unpleasant.)

Given that Bitcoin’s purpose at present is just a different flavor of market speculation, and it will not replace conventional finance or gold mining, now might be the time to consider its net utility.


“Factory – Pollution” by plagal is licensed under CC BY-SA 2.0.

The post Bitcoin inventor was aware of currency’s power demands appeared first on TechHQ.
