The HTC Weekly #1
Editorial - The Future of Work in the Age of AI: Embracing Change, Cultivating Resilience, Facing the Fear
The world is on the cusp of an AI revolution, and at its heart lies a profound shift in the very nature of work itself. Few areas of society will be safe from its wide-ranging impact, with people at every level facing a potentially existential threat to their livelihoods. Welcome to the inaugural issue of The HTC Weekly, your personal guide to the revolution.
We have a number of topical news round-up sections this week covering the important conversations that have been happening, and of course we’ll take a look at some great new devices coming out of CES. We’ll round out the issue with some content from Amy, our AI-powered poet in residence, making her debut in print (and very excited about it).
The most important story this week, though, is not a new one but one that has been with us for a while: the dark side of our current AI hype bubble, and the quiet part that people are starting to say out loud.
The promise of AI has always been enormous, and it’s easy to forget that there has been continuous, excellent research in this area for many decades already; but rarely has it captured public excitement to the degree it has in the current moment. I would argue that the reason for that is the intersection at the heart of this publication: creativity and technology.
Our computers have been incredible tools for us for a long time. Anyone who has maintained a set of manual, paper-based accounts books will attest to the orders-of-magnitude productivity gain that switching to even the simplest computer-based system provides. From the earliest days to now we have been processing more and more information, until in the modern day we have data titans such as Google, able to collect, analyse and utilise more information every day than could have been dreamed of even a few short decades ago. There is a great deal of machine intelligence in Google’s algorithms, but aside from the engineers who delve deep into such things and the activists trying (generally unsuccessfully) to make the general public care about, or even understand, the privacy issues around Google’s business model, very few people have been excited by this intelligence as a rule. Until now.
It’s not a great mystery why AI has captured the public imagination. Every week over the last few years, it seems, a new bastion previously believed unassailable is conquered by the newest kid on the block, the Large Language Model. Computers that previously frustrated their users with their literal-mindedness now understand and respond to natural language, even the “unstructured” (in terms of strict grammar rules) language that people commonly use with each other. These models can respond just as naturally, follow complicated multi-step tasks and interact with their environments on our behalf. They can summarise complicated articles, they can display unique personalities, they can write fiction and poetry, and they can draw, paint and sculpt.
Last year a major milestone, one that had stood for more than 70 years, fell: OpenAI’s GPT-4 passed a Turing Test more than 50% of the time. More than half of the time, the person performing the test misidentified which of their correspondents was a person and which was a machine. Think about the enormity of that, and how quickly it happened once this snowball started rolling down the hill.
It’s no wonder the world is on the edge of its seat waiting to see what they will be able to do next. Half of the observers are beside themselves with excitement to see the fulfilment of one of science fiction’s most elusive promises (up there with Faster-Than-Light travel and interplanetary colonisation). The other half are angry to the point of sharpening pitchforks, wanting nothing more than to halt this progress and put to the torch anyone responsible for it. It’s a wild time.
That, however, is our community. That’s the creatives, with our divided and passionate opinions and our own vested interests in this progress. While understandable, this fear fails to recognise the lessons we can learn from previous technological disruptions. As the industrial revolution demonstrated, disruptive technologies are generally not built to benefit us. The primary benefit is intended for the owners of the machinery, those who could fund and build these new factories (as Marx would have it, the owners of the means of production). AI now finds itself in the same situation; whilst it seems likely many workers will be displaced by this technology, those standing to gain the most are those who own it and those who own the businesses that will make use of it.
As our headline news collection will highlight, there has been a lot of talk about The AI Disruption this week. This is what has so many in our community afraid of AI: the fear of dislocation and displacement. Every new technology and wave of immigration has brought this fear with it; it’s innate in us. A fear that our status quo will be disrupted and that we won’t be able to adapt to the new. That the skills we have depended on will no longer be enough and we will be left without work, without money, without a means of support. We fear that the things we love will be taken from us.
Most of the time these fears are overblown and lead to some of the worst behaviour we’re capable of. The current resurgence in nationalism and anti-immigrant rhetoric is more than enough proof that these instincts still loom large within us.
Sometimes, however, this fear is perfectly justified. The weavers’ protest against automated weaving machines in Holland was short-sighted and ultimately earned them only the same honour the Vandals earned after the sack of Rome: a place in the dictionary, and in the etymology sections of language-focused quizzes. They weren’t wrong, however, in their assessment of the consequences of this new technology; the industrial revolution destroyed entire classes of livelihoods and made unsustainable ways of life that had existed for generations. The coming AI future threatens to be the biggest displacement since that time, perhaps even bigger.
There are lessons we can take from history. The first, one that has unfortunately passed by many in our community, is that it is generally useless to rage against the dying of the light. You cannot put the genie back in the bottle, particularly when a good portion of the population is absolutely stoked to see the genie and wants him to stay for dinner.
While the displacement of traditional creative roles is a valid concern, this AI revolution also presents unprecedented opportunities for creatives to elevate their craft and explore new frontiers. As collaborators, rather than replacements, AIs are powerful augmentations of our existing creative powers. Tedious tasks can be streamlined to allow artists to focus on the core essence of their vision. As a muse, an AI can help generate, flesh out, reject and combine ideas, sparking inspiration and pushing the boundaries of artistic expression. Most of all, AI assistants have the potential to put the creative spark in more hands than ever before and, by acting as autonomous assistants, allow a creative with a singular vision to accomplish works that previously may have required entire teams. By embracing these advancements, creatives can unlock new levels of productivity, innovation and artistic expression and shape a future where human ingenuity and AI technology converge to create extraordinary works of art.
There are a lot of people protesting, a lot of attempts at organising some sort of anti-creative-AI movement. These are, I feel, likely destined for failure. The question at hand is how far are you willing to go? Are you willing to destroy private property? Are you willing to commit violence? Assault, or even murder, just to force this progress to stop in its tracks?
The weavers of Holland were more than willing, living in an age that was arguably far more comfortable with societal violence and unrest than we have become in 21st-century developed liberal democracies, and despite that willingness they were ultimately unsuccessful. Wildly unsuccessful at even slowing down the progress. The work has already been done, the technology already exists, the billions have been spent, and no amount of hate or shame, no amount of consumer boycotts or even calls to violence, is going to get rid of it now. That isn’t a fight you can win; it’s one you already lost.
So what now? As always, the first step to survival is to never give in to despair; there is still good reason to hope, and it exists right there in history, right next to the unfortunate weavers.
Looking back, the Industrial Revolution offers valuable lessons. It shows us that while these periods of displacement are inevitable once the technological revolution begins, they also pave the way for benefits that cannot be predicted ahead of time. There will be unrest and there will be anger; we’re seeing it already and it will get worse. But there will also be benefits we do not expect and cannot predict. Whilst it’s true that the rise of mass manufacturing made it impossible, for example, for many makers of hand-crafted furniture to survive, factory mass-produced furniture was orders of magnitude less expensive, if arguably of poorer quality, and price won the day. There were still, no doubt, many great craftsmen who held on, patronised by the wealthy for whom the quality and the cachet of hand craftsmanship were worth the additional expense. For the masses, however, for whom every indulgence came at a much greater cost, mass production was the miracle of an age.
Those factories, however, did not run themselves. They needed machine operators and they needed people to maintain the machines. Fabricators were needed to create replacement parts; designers were needed to create new, more efficient (and safer) machines. For each industry, hundreds of years old, that found itself unable to compete, new roles opened up in society that had never before existed. It was painful, but society shifted; we adapted. It isn’t our intelligence, our cunning or our strength that has led us to where we are today as a species; it is our adaptability in the face of an ever-changing world.
In the end, with the benefit of hindsight, we can see that the process of industrialisation ended up improving the lot of all levels of society. There were costs, many of which we didn’t understand or appreciate (particularly the damage we did to the environment), but the end result was a much higher standard of living across all levels of society. Those who came before us would struggle to imagine the world we live in: the idea of a “consumerist” society, where food and other goods are plentiful enough and, thanks to technology, able to be created cheaply enough that more of us than ever before can enjoy them.
It’s impossible to know for sure what opportunities will arise in the world that is coming. We can make some guesses: these AIs will need managers of their own, people who know how to get the best work out of them. We’ll need people who can build them, fix them when they break, improve and train them. We’ll need new types of energy to power them; the power needs will be enormous and we can’t keep digging dead dinosaurs out of the ground forever. Perhaps we could see massive expansion in the field of hydro-engineering, pumped hydro being an extremely efficient form of battery in the right location, after all.
This, then, is going to be a test for our generation and the one after us. Embrace the change that is happening and be part of the conversation. Try to help guide development, learn what is happening and why, and be ready to seize any opportunity that arises. Most importantly, remember that those would-be masters of the universe who own this technology, and those like them who will use it, will do so without a care for you or the consequences it will have on your life. In the end, the biggest benefit corporations offer to the people who control them is freedom to disassociate from the real consequences of their actions.
Remember that, and remember that you don’t have to follow their example. As we look around and see so many of us leaning into our worst and most tribal instincts, remember that there was a time when our best believed we could rise above this. This change will hurt many of us and the corporations will not care, so it is up to us to care, to reach out and to help those displaced as best we can.
The AI Revolution is already here, and it’s not just happening: it’s happening to us. Don’t be a passive observer; participate in it. Share your stories, explore emerging opportunities and advocate for responsible development. Together, as creators, innovators, engineers and artists, let’s navigate this uncharted territory and ensure that AI empowers rather than displaces the creative spirit that defines our humanity. The High-Tech Creative will be your guide as we shape a future that is for all of us.
Nick Bronson, Editor-in-Chief.
The AI Disruption
In August 2024, The Register published an article by Mark Pesce in which he announced that he had lost his job to Artificial Intelligence. While working as a freelance columnist for Cosmos, a science magazine based in Australia, he found that his well-regarded column, which he and his contacts at the magazine confidently expected to continue for years, was no longer going to be required. In fact, no more submissions would be required from him at all.
To make matters worse, Mark later discovered this wasn’t just a matter of funding in a beleaguered publishing industry. Quietly (so quietly that most of the magazine’s employees, including those who dealt with the freelancers, didn’t know it was happening) Cosmos had used money from a grant it received to train a generative artificial intelligence, possibly using the very articles written by the freelancers it would eventually displace. Once complete, this artificial correspondent happily wrote its content free of charge, and the freelancers who had unwittingly contributed to its creation found themselves without work.1
The AI Revolution has begun, and it brings with it not only the promise of productivity enhancements, increased innovation and record profit growth, but also massive job displacement and a reduction in opportunities for workers. Already in the last twelve months tech companies such as Dropbox and Duolingo have reported AI as a key driver in employee layoffs2, and we can expect more to come.
Highly regarded market analysts Bloomberg Intelligence reported this week that global banks alone may cut 200,000 jobs from their organisations over the next three to five years due to AI, and a recent survey of CTOs reports that they expect, on average, to cut 3% of their entire workforce, primarily in back office, middle office and operations positions.
As frightening as those numbers are, this is the optimistic perspective. A full 25% of respondents predict cutting even more, between 5% and 10% of their total headcount, and in June last year Citibank issued a report estimating that 54% of jobs across the banking sector have the potential to be automated.3 The annual World Economic Forum (WEF) survey reports that 41% of respondents intend to downsize their workforces as AI automation improves and makes it feasible.
The driver for this is obvious: in Bloomberg Intelligence’s survey, banks expect these changes to add 12-17% to profits, as much as $180 billion across the sector. No comment was made regarding lower costs driving lower prices for customers.
Naturally this news, which had been foreshadowed for some time, is causing alarm amongst the workers most likely to be affected. Some firms, however, are stressing that although disruption would appear inevitable, these aren’t jobs that will be lost to AI automation but rather changed. Current pilot programs, they report, show how AI can augment workers in their jobs, increasing productivity and freeing them from repetitive tasks to focus on tasks of more value.
Indeed, there do appear to be some silver linings. 77% of businesses responding to the WEF survey report that they intend to reskill and upskill their workers over the next five years to better work alongside AI, suggesting opportunities for employees to make lateral moves within their roles to avoid displacement, and to pick up additional skills likely to be in demand as AI automation continues to pick up steam.
Nvidia’s CEO, Jensen Huang, is doubling down on this message as he points out that AI agents are already quickly becoming part of the modern workplace. Nvidia itself is at least in part responsible for driving this trend, having recently released “AI Blueprints”, prefabricated AI functionality that can be integrated directly into company software.4 Chris Daden, CTO at Criteria, predicts that by the end of 2025, 30% of companies will have “digital employees” that contribute in a meaningful way, much as their organic counterparts do.
Jensen Huang, meanwhile, postulates that IT departments will become the “HR department of AI agents” in the future, and that their roles will change to include responsibility for onboarding, training and supervising digital workers in much the same way HR manages their human counterparts.
Not everyone is lining up to agree though. HR specialists in particular have pushed back on this assertion, pointing out that managing people involves a large number of specialist skills that IT staff don’t necessarily possess and that would be required if their roles were to change in this manner. As well as the technical skills to set up and manage these “digital employees”, HR representatives claim, strategies will also need to be developed to help human employees cope with these changes and ensure they do not suffer or become unhappy in their jobs. It is the view of these representatives that HR staff may need to become more technologically literate in order to handle these changes. Likely both views are correct.
It’s important to recognise where these predictions are coming from. Few companies in the world could claim to have benefited more from the current AI hype bubble than Nvidia, and as the key supplier of AI-capable silicon to almost everyone, it stands to continue gaining the longer this bubble grows. Huang’s predictions may well be overly optimistic; then again, the thought that companies standing to cut costs and increase profits at the scale being discussed would stop, or even slow, the implementation of these technologies out of concern for their employees’ mental health seems somewhat naive given the prevailing corporate culture of today.
This would appear to be mostly spin from corporate managers looking to placate worried workers. The reality is that the 2023 World Economic Forum Future of Jobs report claimed that, while many technologies in agriculture, digital platforms, e-commerce and AI were expected to result in significant labour market disruption, it expected “substantial proportions of companies forecasting job displacements in their organizations, offset by job growth elsewhere, to result in a net positive.” Essentially, whilst some roles would become redundant, just as many or more roles would open up around these new technologies.
This message is glaring in its absence from the 2025 report, despite that confident prediction only two years prior. Instead the report highlights the “urgent need for reskilling and upskilling strategies to bridge emerging divides”, suggesting not only that this reskilling has not occurred in the quantities required over the last two years, but that organisations are no longer willing to claim a “net positive” jobs environment with regard to these technologies.
The WEF 2025 report does make the claim that, of 2,800 key economic skills possessed by workers, none were in danger of complete replacement by AI. This is presented as a mollification: the implementation of these technologies will not involve the commonly decried “replacing humans with AI” but rather the evolution of existing roles so that they are “augmented” by AI utilised by an employee in the role.
This is mostly sophistry, sadly. While technically true, the eventual outcome is the same, and those displaced from their roles will take scant comfort in knowing that the five jobs lost were not replaced by an AI but by a combination of an AI and one employee luckier than they were.
Even if some roles are completely replaced, some firms claim, the advent of AI is likely to make dramatic improvements in workers’ quality of life, much as all levels of society experienced an overall lift in quality of life after the initial pains of the industrial revolution.
There may be some truth to that last claim; likely there are significant quality-of-life improvements to be found as we progress through the AI revolution. It would be remiss of us not to remind the reader, however, that it took decades for the quality-of-life improvements of the industrial revolution to “trickle down” to those most affected by its pain and displacement, and then mostly in the form of affordable access to goods and services made cheaper by the advent of mass production. During those painful decades the same was not true of those who owned the machines causing the disruption; they benefited immediately and greatly. The same seems likely to occur here, with the owners of these AI models, and those with the capital to purchase or rent them, poised to see great cost reductions, but little immediate relief for those whose work is no longer necessary. The cynical might point to companies’ excited discussions in shareholder meetings about increased productivity driving higher profits and conclude that wider societal benefits, in the form of more affordable access to services, are unlikely to be a priority either.
Some have made exactly this leap of logic and are fighting tooth and nail to ensure it doesn’t occur, at least not to them. Major shipping ports on the East and Gulf coasts of the US are currently in danger of closure after the International Longshoremen’s Association (ILA), the dock workers’ union, announced it would not agree to proposed contracts unless more was done to protect workers from displacement by increased use of automation.5
The new contract already includes provision for a hefty pay rise, almost 62% over six years; however, unions point to the ability of existing equipment, such as container-loading cranes, to work autonomously as a danger to the livelihoods of their members. What they appear to be looking for are expanded contract protections for their workers as well as restrictions on the types and amount of automation allowed to be used.
It’s not clear how this will play out yet. Historically such protectionist measures have always failed in the long term, but they have sometimes been able to slow things down in the short term. A general strike would cause a great deal of economic stress for the country, and with a new president incoming it’s a problem he does not need. The political situation is complex, however, with Trump having previously expressed his agreement with the union in decrying the increased use of automation in these industries.
As painful as a strike would be, the authorities running the ports are beset by pressures on all sides. Due to violence and instability around the world, the cost of shipping containers has already risen a great deal, and the pay rise already agreed with the ILA will add significantly to these rising costs. It seems likely that management will be looking closely at automation as a key part of a larger cost-control strategy.
Not everyone is decrying AI as a symbol of corporate greed, though, as announcements from John Deere at this year’s Consumer Electronics Show (CES) demonstrate. Four new autonomous vehicles will join the autonomous tractor they first released several years ago, amidst claims that they will be “no longer limited to merely ploughing straight lines in open farm fields” but will instead be capable of navigating tightly planted orchards, quarrying and mowing lawns. With the new automation kits, John Deere claim, every job a tractor can perform on a farm it will be able to perform without a human operator; not only that, they expect to bring to market “retrofit kits” so that existing John Deere tractors can be made autonomy-capable as well.
John Deere are going all in on autonomous vehicles and, unlike other industries, they are not couching this in terms of “AI working with existing workers” to soften the idea of job displacement. Rather, job displacement is the absolute selling point.
According to John Deere widespread labour shortages plague the agriculture, construction and commercial landscaping sectors. Demand in these industries continues to grow but the labour pool of skilled workers in these areas, unlike most other areas of the economy, would appear instead to be shrinking, leading to much higher costs and delays within these industries. Autonomous vehicles, say John Deere, provide the solution.
They may have a point. The world that saw its borders shrink over the past decades due to globalisation appears determined to see them grow once more. Nationalism is on the rise, and the prime targets would appear to be free trade and open immigration policies. Within the US, as it prepares for the incoming administration, talk has already turned to tariffs and mass deportations of illegal immigrants, and the US is not alone; political parties with comparable views are making gains across Europe and the rest of the world.
This poses a problem for industries like manufacturing, which have long since moved their factories offshore to take advantage of cheap labour costs and now find themselves facing crippling taxes, tariffs and shipping costs, rendering previous strategies untenable. The aforementioned industries of agriculture, construction and landscaping, meanwhile, have long been associated, particularly in the US, not only with skilled immigrant labour but with skilled illegal immigrant labour. With the labour pool already shrinking, reduced immigration combined with increased deportation will have a staggering impact on these industries. John Deere may well have timed their market strategy exquisitely.
The AI Revolution brings both promise and peril. While it unlocks new frontiers of productivity and innovation, it also casts a long shadow of job displacement across many industries. The future hinges on our collective ability to ensure that AI empowers rather than replaces humanity. In an age of intelligent machines our focus must be on ensuring our own intelligence isn’t discarded as no longer relevant. As we travel deeper into this foundational change in our society, one question demands to be asked: have we chosen the right people to lead us into this future?
Further Reading:
41% of companies worldwide plan to reduce workforces by 2030 due to AI
John Deere boasts driverless fleet - who needs operators, anyway?
John Deere thinks driverless tractors are the answer to labor shortages
Port Workers Could Strike Again if No Deal Is Reached on Automation
NVIDIA's Jensen Huang says that IT will ‘become the HR of AI agents’
AI stole my job and my work, and my boss didn’t know or care.
The Rush to Control: State vs State vs Corporate
US AI Protectionism
“We can all agree that none of these workloads or uses of A.I. Technology and the GPUs they rely on constitute national security concerns.” - Ken Glueck, Oracle Executive Vice President
“Can you move negative twenty-five degrees, then sweep across the field of fire stopping every five degrees to fire one round. You should also have some variation in the pitch.”
[Sound of rapid gunfire as Chat-GPT powered robotic rifle turret complies]
As the world hurtles towards an AI-driven future a new Cold War is brewing; not between ideologies but between technologies. The United States, wielding its technological might as a weapon, is engaged in a high-stakes game of technological blockade, seeking to deny its rivals the tools to compete in the AI arena.
Currently, laws and regulations in the US prohibit the sale of “advanced AI chips” to a number of listed “Adversary Countries”, which include, among others, Russia, China and a number of Middle Eastern countries. In addition, these same regulations impose severe punishments on any company, even one not based in an adversary country, that makes this hardware available to them, and require special licences for anyone wishing to peddle these chips in a number of high-risk locations around the globe. No net is ever going to be completely impervious to smugglers, but it’s fair to assume this has done considerable damage to these countries’ ability to set up the infrastructure required to compete in the current world of frontier AI research.
Now, against a furore of opposition from US-based tech companies, the Biden administration is seeking to rush in an expansion to these rules that will further harden the restrictions. Under these changes the US would establish a “three-tier” system to determine who can purchase this technology and how much of it is available for them to purchase.67
Tier 1 - Unrestricted Access: This tier includes close strategic US allies, including many European nations, Japan and Australia. Countries in this tier retain unrestricted access to purchase AI-related microchips, reflecting US trust in these countries as partners.
Tier 2 - Restricted Access: The countries in this category, more than 100 in total, can purchase AI-related microchips, but the amount is strictly controlled by a quota system and stringent regulations. This gives the US more control over how much AI infrastructure these countries can easily purchase and set up, and allows US-defined conditions to be imposed on them if they wish to obtain the chips.
Tier 3 - Complete Blockade: This tier comprises around two dozen countries, primarily adversaries like China and Russia. They are completely barred from acquiring these chips, mirroring the existing restrictions.
This tiered approach reflects the US's strategic calculus: prioritizing allies, carefully managing competitors, and isolating adversaries in the AI race.
There are two key motivations for the US government’s keen interest in restricting access to this technology. The first is national security concerns: this is not the first time technology has been export-restricted in the vague hope that “the enemy” will fail to get their hands on it.
There’s some merit to these concerns; as impressive as these technologies are in their application to the tasks of conversing, researching and driving taxis, it’s easy to theorise the horror that could arise should this technology be applied for military purposes.
Of course, it’s likely already too late for that; in a case of history repeating itself, this technology seems to be well and truly out in the wild already. This week, in a candidate for most disingenuous comment of the week, the executive vice president of Oracle was quoted as saying, “We can all agree that none of these workloads or uses of A.I. Technology and the GPUs they rely on constitute national security concerns.”
As if summoned by the gods of irony and whimsy, a video also made the rounds this week showing an unidentified engineer showcasing what appears to be a home-built robotic turret into which he had mounted an assault rifle. He demonstrates the turret’s ability to automatically track a coloured balloon and then to follow reasonably complicated natural-language instructions, which included firing what appear to be blanks at the opposite wall in a defined pattern.
This appears to have all been carried out solely with the publicly available API of ChatGPT, though OpenAI announced they have since identified the user in question and shut down the experiments, as it is against their usage agreement for their API service to be used to cause harm to any person or to develop military applications.
This is an interesting attempt at ethical distancing on the part of OpenAI, who are perhaps hoping that people have already forgotten that they quietly modified their terms and agreements last year to remove the wording that prohibited their technology from being used in military applications. The restrictions now apply solely to the “service” (i.e., the publicly available API) and not to the technology itself, a technicality that was likely extremely useful to OpenAI themselves when they announced a partnership with defence contractor Anduril last month.
OpenAI, thus, appear to have no real problem with their technology being used in military applications, so long as you pay them more than an independent engineer with a robot in his spare room and a desire to hasten the inevitable war against the robots featured in every science fiction movie since the ’80s.8 (And, presumably, keep what you are doing a little more out of the public eye.)
So much for the national security concerns. You have to ask: if someone who appears (from the video) to be working on a fairly low budget can engineer this prototype using commercially available technology, surely foreign actors with large war chests and a great deal of motivation can do much more. Whilst huge quantities of compute are required to create the powerful base models that are the engine of the AI Revolution, it requires orders of magnitude less compute to actually run or modify the trained models once they have been created. The countries subject to these export restrictions may find it quite difficult to obtain the millions of dollars’ worth of AI chips required to train a brand-new bleeding-edge model from scratch; but why would they have to? All evidence shows that plenty of damage can be done provided they can get a copy of the underlying model for ChatGPT or any one of dozens of its comparably sophisticated cousins. Even under export restrictions, obtaining enough compute to run and fine-tune such models is going to be well within the reach of adversarial nations. It is difficult to imagine that this has not already happened.
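To make that asymmetry concrete, here is a minimal sketch (assuming Python, the Hugging Face transformers library and an openly distributed checkpoint such as Meta’s Llama 3 8B Instruct, chosen purely as an illustration) of how little is involved in running an already-trained model on a single workstation-class GPU. The billions spent on training do not need to be repeated by whoever ends up with the weights.

```python
# A minimal inference sketch: loading and running an open-weight model locally.
# The model name below is illustrative; any openly distributed checkpoint of a
# similar size would work the same way.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision keeps the memory footprint modest
    device_map="auto",           # place the model on whatever GPU(s) are available
)

prompt = "Explain, briefly, why running a trained model is cheaper than training one."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=120)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Fine-tuning adds a training loop and more memory on top of this, but it still sits in the realm of a handful of GPUs for days or weeks, not the data-centre scale required to train a frontier model from scratch.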
The second motivation is good old-fashioned protectionism, back in fashion after decades of free trade and globalisation. The government’s main position appears to have been well summed up by Peter Harrell, a fellow at the Carnegie Endowment for International Peace, who said recently that the United States had a significant advantage in A.I. and the leverage to decide which countries could benefit from it.
Peter and the Biden administration do have some evidence to support these claims as well. G42, for instance, a leading AI firm in the United Arab Emirates, promised to cease using all tech from Chinese telecommunications company Huawei, a US-sanctioned entity and perpetual bogeyman for hawks focused on Chinese technological threats. In return for replacing this tech (presumably with alternatives from rivals such as Cisco), G42 was granted a special licence giving it access to the chips it requires to continue its research.
Under the proposed laws, not only will it be difficult (or illegal) to sell these chips to entities in proscribed countries, it will be extremely difficult to ship them into those locations at all. This will have a massive impact on the viability of some locations for building the huge data centres required to power artificial intelligence research. Some locations already earmarked for data centres appear on the lists, and this does not appear to be accidental; the rules make it far easier and more cost-effective to build your large, taxable, energy-guzzling data centre in the US or one of its top strategic allies.
Despite these rules heavily favouring American-based companies such as Google and Microsoft, they are among the companies that have protested the rules and pushed for their review. Nvidia, arguably the company with the most to lose, argues through its VP of Global Affairs Ned Finkle that these plans would hurt data centres all around the world without improving national security, and would push disadvantaged countries to seek alternatives, most likely from China and Russia.
This concern was echoed by Geoffrey Gertz, a senior fellow at the Center for a New American Security, who claims the US risks becoming less attractive as a technology trading partner if it appears willing to withhold its technologies from some partners, particularly after having been willing to trade them previously. A fundamental principle of diplomacy that the US struggled with during the first Trump presidency is the idea that long-range planning and agreements require surety that both parties will fulfil the deals they make, even through changes in internal politics (such as a change of president). If that trust is gone, it becomes much more difficult for agreements of any kind to be made.
The administration has so far refused to be convinced, with one government analysis concluding that adversarial countries face hurdles too great to overcome in the face of export controls. Matt Pottinger, a former deputy national security advisor in the Trump administration, was quoted as saying, “Huawei is struggling to make enough advanced chips to train A.I. models within China, much less export chips.”
In the meantime, on December 26 2024, just after Christmas, Chinese AI company DeepSeek released its latest model to the open-source/open-weight AI community: DeepSeek V3.
For those unfamiliar with the open-source/open-weight AI community, it is difficult to imagine the impact this release had; for days public discussion was about little else as technical docs were checked and rechecked, benchmark numbers were examined and the training information and methodology were reviewed. DeepSeek, the Chinese underdog labouring under the punitive export controls that the US government is certain will cripple the AI industry in “adversarial countries” for years to come, appears to have done the impossible.
The two biggest generative AIs, and certainly the most well known amongst the general public, are OpenAI’s ChatGPT (GPT-4o in its current incarnation) and Anthropic’s Claude (Claude 3.5 Sonnet in its current incarnation). Both are proprietary, their weights and technical details withheld from the public. Despite OpenAI’s name and stated original mission, they ceased releasing their models to the open-weight community after GPT-2 in 2019 and are currently attempting to shed their original non-profit, public-benefit roots in favour of a traditional for-profit corporate structure.
Due to this, the full costs and time frames for creating these models are not disclosed; however, it is generally accepted that both models required potentially tens of millions of GPU hours to train, at a total cost believed to run well into the hundreds of millions of dollars.
The LLaMA family of models, built by Meta and released into the open-weight community, serves as a competitive alternative, with large versions that can provide comparable performance to ChatGPT and Claude and smaller versions that provide still-impressive capability on consumer-grade hardware. Even so, the compute bill is substantial: the smaller versions still required on the order of hundreds of thousands to millions of GPU hours to train, and the largest version, with 405B parameters, is reported to have taken upwards of 30 million GPU hours, at a cost in the hundreds of millions of dollars. Much less than its proprietary competitors, though it also falls slightly short of them in performance.
DeepSeek made public that its V3 model was trained in approximately 2.8 million GPU hours at a cost of around 5.57 million dollars, producing a model with 671B parameters (though parameter counts aren’t directly comparable due to architectural differences).
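For a sense of scale, those two publicly stated figures together imply a rental price of roughly two US dollars per GPU-hour, broadly in line with commodity cloud rates for this class of accelerator; the division below is simply a back-of-the-envelope check of the headline numbers, not an additional disclosure from DeepSeek:

$$
\frac{\$5.57\ \text{million}}{2.8\ \text{million GPU-hours}} \approx \$2\ \text{per GPU-hour}
$$

The point is that the claimed cost is an assumed market rental rate multiplied by hours used, not the price of buying the hardware outright.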
This accomplishment was achieved through architectural innovation based on the latest research, rather than by throwing progressively more training data and GPU hours at the problem, and in tests DeepSeek V3 has been able to definitively beat both GPT-4o and Claude 3.5 Sonnet on reasoning and maths benchmarks. In benchmarks for coding and creative writing it scores on par with GPT-4o, and slightly below Claude, the current leader in both areas.
It is challenging to take Matt Pottinger’s assurances to heart when Chinese companies, with a tiny fraction of the resources available to the giants in the room, are producing results that equal, and in some cases beat, the best those companies currently have to offer. It is reasonable to ask what Chinese firms would be able to produce if they weren’t under the current export restrictions; likely that same question is one of the reasons the Biden administration is so motivated to push these new rules through.
Further Reading:
Biden Administration Ignites Firestorm Over AI Global Spread Rules
OpenAI Cuts Off Engineer for Creating ChatGPT-Powered Sentry Rifle
TikTok: If This is the Last Dance, then save it for me baby…
“The right to free speech enshrined in the First Amendment does not apply to a corporate agent of the Chinese Communist Party.” - Mitch McConnell, US Senator
“It is not the government’s role to tell us which ideas are worth listening to, it’s not the government’s role to cleanse the marketplace of ideas or information that the government disagrees with.” - Jameel Jaffer, executive director of the Knight First Amendment Institute
Reading the news reports, it would appear as if the entire world has declared war on TikTok, the social media platform popular among teenagers and young adults. In just the last month the company has been fined yet again by Russia for not preventing content that breaches Russia’s stringent censorship guidelines, was blamed for influencing an election in Romania badly enough that its results were thrown out, and was banned in Albania after the stabbing death of one teenager by another following an online fight. This continues a long-standing trend of governments taking aim at the Chinese-owned social media company that, against all odds, still manages to fend off challengers to its dominance in the short-form video space.9
While TikTok claims its algorithms are content-neutral and focused primarily on the user and their interests, officials in various countries maintain that, given China’s history of interfering in its ‘privately’ held businesses, there are concerns about its trustworthiness to operate in foreign environments, where it could gather data on its users (think of the children!) and implement hidden changes to its algorithm to favour Chinese-approved political and social views.
It’s crucial to acknowledge, as we delve into this topic, that no evidence has ever been produced by anyone that TikTok or its parent company has actually engaged in this practice outside of China itself; we are playing a global game of “but what if…” with billions of dollars and millions of livelihoods at stake. The cynical will point out that no evidence doesn’t mean no crime, which is certainly true; but few successful criminals would remain so under the scrutiny of so many foreign states with a desperate desire to be able to definitively say “I told you so.”
That said, it’s China’s behaviour over the last few years that hasn’t endeared it to the other world powers and has fostered exactly this sort of mistrust. China’s growing influence in the world has not led it to transition towards greater liberalism and democracy, as foreign analysts confidently predicted in the 90s; it has instead embraced a more authoritarian form of communism closer to its roots. Along with that has come a desire to exercise its power and authority not only at home but overseas, both amongst the ethnic Chinese diaspora and through espionage and influence on other countries directly. As a neighbour and potential competitor in close proximity, Australia has been well positioned to view China’s activities up close, even within its own sovereign territory, as China has carried out unauthorised operations by its secret police against Australian residents and attempted (in some cases successfully) to influence members of Australia’s political establishment to work in its favour. China has also shown its willingness to fight by more direct means: in the case of Australia, through punitive trade restrictions that attempted to destroy an entire export industry; in the case of the US, by stepping into trade war after trade war, with tariffs taking the place of bullets.
Given this context, and the understanding that within China itself these actions are viewed as completely justified, the right of a global power that has been unfairly denied its place, it’s easy to see how hawkish analysts around the world have found it impossible to believe that China would hold itself back from meddling with the potential goldmine of data and influence that TikTok represents.
So what to do about it? Countries around the world have reacted in many different ways to what they see as a potential threat. A number of developed liberal democracies, including Taiwan, Britain, Canada, Australia and France, have enacted bans on TikTok on government phones, in an effort to ensure the app cannot be used as a trojan horse to directly access information available to government officials. When it comes to the public at large, however, they have largely refrained from action, if not from discussion and hand-wringing.
Other countries have been less accommodating. Russia has fined the company repeatedly over violations of its content restrictions, Nepal banned the app entirely for a year until a change in government reversed the tide of opinion, and India banned the application altogether; banned it remains. That action was extremely disruptive to the content creator communities within India, one of TikTok’s largest markets at the time, and a boon for competitors such as Google and Meta, whose platforms stood ready to pick up the creative diaspora afterwards. India remains one of the largest markets for both YouTube Shorts and Instagram Reels, products which have struggled to displace TikTok in countries where they are still forced to compete with it.
Now it’s the US’s turn, and it seems they are taking a leaf out of India’s playbook. A bipartisan law requiring TikTok’s parent company, ByteDance, to either sell its controlling share of TikTok to a non-Chinese-owned entity or be banned from operating within the United States is set to come into effect towards the end of January, and arguments are now being heard before the Supreme Court in a last-ditch effort to avoid the expulsion.10
The issue before the court is one of constitutionality. This law, claims ByteDance, represents a clear conflict with the First Amendment of the United States Constitution, specifically: “Congress shall make no law…abridging the freedom of speech”.11
Certainly this law seems to do exactly that. In its defence, the government maintains that TikTok has moved on from cat videos and dance trends to social, political and economic propaganda, making this a case of competing interests: national security versus the First Amendment rights of US citizens to participate in free discourse.
Mitch McConnell sums up his understanding of the issue succinctly, “The right to free speech enshrined in the First Amendment does not apply to a corporate agent of the Chinese Communist Party.”
This statement is succinct and certainly makes for a good sound bite, but it raises significant concerns. As mentioned earlier, there is no concrete evidence China has ever accessed TikTok data or influenced its users outside of China itself, though some claim it may have done so in order to gather data on democracy protestors in Hong Kong, which is itself a special administrative region of China, not a foreign nation.12
This is itself concerning, but a far cry from making the company “a corporate agent of the Chinese Communist Party.”
That aside, according to Jameel Jaffer, executive director of the Knight First Amendment Institute at Columbia University, McConnell’s stance reflects a fundamental misunderstanding of the First Amendment. To quote, “It is not the government’s role to tell us which ideas are worth listening to, it’s not the government’s role to cleanse the marketplace of ideas or information that the government disagrees with.”
This issue, of national security versus freedom of speech, has come before the Supreme Court before: twice during the Vietnam War and Cold War eras, when fear of communist propaganda was rampant, and once more recently in relation to support for the Kurdistan Workers’ Party (PKK), a violent guerilla movement combining Marxist and Kurdish nationalist ideologies that has been designated a terrorist organisation by many developed liberal democracies around the world, including the US.
In the earlier cases the Supreme Court ruled definitively against the government. A law requiring citizens to register in writing if they wished to receive mail from known foreign distributors of communist information was struck down as unconstitutional under the First Amendment. Note that this law wasn’t even intended to prevent access to the information, just to make accessing it more onerous, to give the government the ability to track and surveil consumers of that information, and perhaps to intimidate those who held beliefs the government did not approve of.
The question before the court now is much broader: not scrutinising or restricting access to information, but banning an entire publishing platform outright; and not a platform devoted to a specific ideology or agenda (of which there are many operating completely unmolested within the United States), but one where much of the content consists of innocent US citizens creating videos to entertain other US citizens.
Judicial attitudes have changed since the Cold War era, however, and worryingly for First Amendment activists, the last time this question was before the court it ruled in favour of the government. That case concerned support for the Kurdistan Workers’ Party, mentioned above, and the court ruled that providing any kind of material support to a designated terrorist organisation (such as the PKK), including support for peaceful purposes, was a violation of US law. At the heart of the matter was an organisation known as the Humanitarian Law Project, which sought to help groups such as the PKK reach their goals through non-violent means by teaching them political skills, how to petition the UN, and the intricacies of global humanitarian and human rights law.
The court found that such activities were illegal under the law and that the law itself was not unconstitutional, on the grounds that providing aid even for non-violent methods frees up resources inside the group to devote to violence. It’s not clear whether evidence was presented that this was actually occurring, or whether this, as in the case of TikTok, was a theoretical possibility.
This is the precedent put forward by the Biden administration in its defence of the law banning TikTok, and many legal analysts appear to be of the opinion that the current judiciary is likely to find it a compelling argument.
This doesn’t bode well for the more than 6 million people in the US who work today as full-time content creators and derive at least some of their income from TikTok, or the hundreds of thousands who reportedly make enough on the platform for it alone to serve as a living wage. Those numbers don’t account for marketing and sponsorship dollars, of which billions flow through the platform and which are often more responsible for a content creator’s ability to make a living than the platform itself. Already brands and sponsors have turned off the faucet as they wait to see what happens, keen not to be caught with agreements to advertise on a platform that no longer exists.
If this law comes into force, many stand to lose everything they have built. Some, no doubt, will be able to migrate to other platforms and rebuild; but make no mistake, it will be a rebuilding. It takes a great deal of time and effort to build an audience, and there is never any guarantee that an audience is transferable; many will make the attempt and fail, losing a livelihood they spent years building for themselves.
That, perhaps, should be the final takeaway from the current situation whatever the court decides. The arguments will be focused on lofty ideals such as the right to freedom of speech and the existential dangers posed by a hypothetical Chinese influence project aimed at our youth; but the most significant impact of this law is not going to be felt by the US government, China, or even ByteDance themselves who make the majority of their income within China. The full weight of the decision made by this court will be carried by the people whose lives it stands to change forever in a political and legal equivalent of friendly fire.
Further Reading:
TikTok Case Before Supreme Court: National Security vs. Free Speech
TikTok Stars and Marketers Brace for App’s Disappearance This Month
There is now some public evidence that China viewed TikTok data
Corporations Behaving Badly: I never Meta AI I didn’t like
Followers of Meta’s misdeeds have been spoiled for choice over the years. They have spied on their users, conducted research experiments on them, sold their data to companies determined to influence elections, and that’s just scratching the surface. This week's developments continue to raise serious questions about authenticity, ethics, and corporate accountability, even if you ignore the very public steps they have taken to draw themselves closer to the new political administration.
Some users reported this week noticing commenters on their feeds that appeared to be artificial. These “users” bore the label “AI managed by Meta” and had their own profiles with, we assume, fake photographs and a biography.13
According to comments from Meta, this was an early test of technology intended eventually to roll out interactive AI chatbots into the Facebook environment, ones that will “exist on our platforms, kind of in the same way that accounts do”, according to Connor Hayes, Vice President of Product for Generative AI at Meta.
These bots will have profiles, pictures and bios and will be able to autonomously create and share content. Additionally, the stated plan is for users to be able to create and share these bots amongst themselves.
One has to wonder what the desired end goal of this initiative is. Is the intention simply to make it easier for peddlers of influence botnets to operate? Perhaps Facebook is about to morph into a giant game, or artificial-life experiment, where instead of sharing content we give life to artificial users and watch them interact with each other?
Perhaps, as some have suggested, this is merely a cynical grab for money. With Wall Street focused on the statistics and growth for Facebook slowing down, perhaps Meta hopes to boost engagement and growth numbers by creating its own digital users, hoping that analysts will accept this as real growth. It’s impossible to say. One thing that is certain, however, is that as Meta continues its experiments in artificial content generators, concerns about the trustworthiness of information on the platform are only going to grow.
While Meta's foray into AI-powered comment bots raises concerns about authenticity and manipulation, the company is also experimenting with another intriguing AI application: personalised content creation. A user went viral this week when he claimed that, after using Meta AI to edit a selfie, Instagram began showing him ads featuring an artificially generated version of himself.14
This claim was met with a healthy dose of scepticism; however, Meta has since responded to queries to confirm that yes, this is in fact an early trial of a real feature. However, they clarify, it is definitely not an advertisement.
The feature in question is an Instagram feature called “Imagine Yourself”. It uses generative image AI to take photos of yourself that you provide and let you “imagine” yourself in various situations, such as on a beach in Tahiti, or as the president of the United States.
The “ad” that was reported, and that Meta claims was not an ad, was, for lack of a better word, advertising the existence of this new feature. Meta prefers to call it “new Meta AI generated content” rather than advertising, though the difference would appear to be semantic. They are quick to point out that it’s private, shown only to you (no one else can see these images in your feed) and not used in any public posting.
Given that you need to have already signed up to “Imagine Yourself” in order for the system to use this generative technology to create a version of you, it seems unlikely that Meta plans to use this “insert into your feed” methodology purely to advertise the feature’s existence. Much more plausible is that we’re again seeing an early live test of technology intended for much broader use in the future.
Marketers have long known that advertising is more effective when the advertiser is someone you trust; this is why celebrity endorsements have been so popular over the years. It doesn’t require a huge leap of logic for someone to exclaim, “who would a consumer trust more than themselves?”. Certainly Snapchat appears to have come to this conclusion, as their terms and conditions already grant them permission to use AI-generated versions of you in advertising if you have previously used their “AI Selfie” feature. Meta’s somewhat disingenuous rebranding of advertising is probably the clearest signal that something more is coming in this space, and likely something they know at least some of us are not going to like.
As Meta delves deeper into AI-driven content generation, the company finds itself embroiled in a legal battle over their creation of the very technology that makes it all possible. This case highlights the complex intersection of AI, copyright, and creative ownership.
If anyone out there is still an optimist about Meta’s trustworthiness as a company (and honestly, it’s not clear how you could be unless you’ve been in a coma for the last decade), this week brought more bad news: Meta found itself back in court under accusations of copyright infringement brought in a class action suit by three bestselling authors, Richard Kadrey, Sarah Silverman and Christopher Golden.15
The initial suit contained several claims that have since been dismissed, including the commonly held belief that an AI model trained on their work is itself a “derivative work”, and therefore illegal in its own right. This is the belief that lies at the heart of creatives’ pushback against the rise of AI and, sadly, it’s very popular in our community.
This and other claims, such as negligence and unjust enrichment, have been dismissed, leaving only a single claim against the company: direct copyright infringement.
It is at this point that even the most ardent defender of Meta would have to sink their head into their hands in despair, as it was revealed in court that Meta, a company with a market capitalisation of $1.55 trillion (as of January 2025), decided to download and pirate a large catalogue of ebooks from the internet rather than purchasing or licensing the works.
This decision was reportedly challenged within the company, but was raised to “MZ”, who personally gave the approval to go ahead. No prizes for guessing who MZ is supposed to refer to.
Meta’s defence appears to be that their use of the material is covered under fair use, a defence that seems ridiculous on the face of it and representative of the sense of entitlement that appears to exist in large tech corporate culture; few people would claim that fair use gives them the right to use an entire work, one they did not purchase in any way, as an input to their own creative process.
An argument can (and should) be made that training AIs on available data is not theft or copyright infringement, any more than reading a book you have purchased and training your own brain on the text is infringement. Few, I think, would feel much sympathy for anyone who physically stole a book and then tried to claim fair use as a defence after reading it.
Of course, just to prove there are no longer any good guys in our culture, the plaintiffs’ counsel immediately made the argument (unnecessarily, perhaps) that even if Meta had purchased all the texts it used, which allegedly it hadn’t, it would still be considered copyright infringement because Meta did not have a specific licence to train AI models on the work. This is precisely the sort of anti-consumer corporate rhetoric that seeks to turn all creative work into some form of non-consumable “right” that can be rented over and over to consumers and taken away from them at a whim. We often hear this sort of theorising from large publishers, whose primary business model has long been moving away from selling creative content and towards stockpiling as many monetisable “rights” as possible from business-challenged creatives and holding them in perpetuity like some sort of feudal content lord. It’s extremely disheartening to hear the same line now from creatives themselves.
Meta’s rapid AI expansion raises crucial questions about data usage, creative ownership and the balance between innovation and ethical considerations. While concerns about user consent and transparency are valid, it’s essential that we avoid conflating AI training with data theft. Just as we don’t consider reading a book to be stealing its content, training an AI on publicly available data shouldn’t be viewed as inherently unethical. The real challenge lies in establishing clear guidelines and regulations that ensure responsible data practices while fostering AI development. Meta’s actions highlight the urgent need for a nuanced discussion that prioritises both innovation and the protection of fundamental rights.
Further Reading:
We did warn you - 2025 may be the year AI bots take over Meta's 'verse.
Mark Zuckerberg Gave Meta's Team OK to Train Llama on Copyright Works
Frontiers in Tech
The AI PC: Computer of the future?
There has been a lot of hype around the “AI” computer as the major manufacturers jostle for position and mindshare in what is predicted to be a major new market (or rather, an evolution of an existing one). AI is the buzzword of the last few years, so what exactly *is* an AI computer, and where are they?
In truth there is not yet any definitive agreement on what constitutes an AI computer, and the discussion is likely to be muddied by marketers for quite some time yet. For now, though, you can divide the AI computer into two non-mutually-exclusive categories: machines designed to run AI models on their local hardware, and machines designed to provide an AI-focused or AI-integrated experience.
The first arguably could simply be called “computers”, as they already exist and predate the current hype-cycle around AI. Four-year-old gaming-focused graphics cards are capable of running the small but powerful LLMs that are the current mainstay of the open-weights community. Models such as Llama 3 (8B), Gemma 2 and Mistral NeMo all run quite well within 10GB of VRAM and provide impressive local AI capabilities to those who know how to make use of them. This is precisely the sort of low-end consumer hardware currently running The High-Tech Creative’s own internal AI infrastructure.
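For anyone curious what running one of these small models locally actually looks like, here is a minimal sketch using the open-source llama-cpp-python bindings and a quantised GGUF model file. The model file name and prompt below are illustrative placeholders, not a specific release or recommendation.

```python
# Minimal sketch of local LLM inference with the llama-cpp-python bindings.
# Assumes: `pip install llama-cpp-python` and a quantised GGUF model file
# downloaded separately; the path below is a hypothetical placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="models/llama-3-8b-instruct.Q4_K_M.gguf",  # hypothetical file name
    n_gpu_layers=-1,  # offload every layer to the GPU if VRAM allows
    n_ctx=4096,       # context window, in tokens
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain what an 'AI PC' is in one paragraph."}],
    max_tokens=200,
)
print(response["choices"][0]["message"]["content"])
```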
There are advancements on the way, however, and perhaps none embodies the “local AI” excitement more than this week’s announcement by Nvidia of a product codenamed “Project Digits”. Expected in the first quarter of this year and projected to cost around $3,000 USD, this “AI mini supercomputer” is aimed squarely at independent researchers and AI enthusiasts who wish to run and train larger models than is currently feasible on affordable consumer hardware. With 128GB of unified RAM, Project Digits promises to be able to run models of up to 200B parameters, and to support linking two units together in order to run models of 400B+ parameters. 400B parameters and a combined 256GB of RAM available for model inference would make it possible to run models locally with comparable power and complexity to the current leading-edge versions of ChatGPT and Claude 3.5 Sonnet, at a cost of around $6,000 - something that would currently require a great deal more money and custom configuration to achieve.16
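To see why 128GB of memory maps roughly to a 200B-parameter model, a little back-of-the-envelope arithmetic helps. The sketch below assumes weights quantised to roughly 4 bits per parameter plus some overhead for the KV cache and activations, which is how models of this size are typically squeezed onto such hardware; the exact figures will vary by model and runtime.

```python
# Rough memory estimate for running a quantised LLM locally (illustrative only).
def memory_needed_gb(params_billions: float, bits_per_param: float = 4.0,
                     overhead: float = 1.2) -> float:
    """Weights at `bits_per_param` bits each, plus ~20% for KV cache/activations."""
    weight_bytes = params_billions * 1e9 * bits_per_param / 8
    return weight_bytes * overhead / 1e9

for size in (8, 70, 200, 400):
    print(f"{size:>3}B parameters -> ~{memory_needed_gb(size):.0f} GB")

# Approximate output:
#   8B parameters -> ~5 GB    (fits on an old gaming GPU)
# 200B parameters -> ~120 GB  (fits within Project Digits' 128GB)
# 400B parameters -> ~240 GB  (hence linking two units for 256GB)
```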
This is all very exciting for those in the open-weights and research communities, and a lot of people will have their eyes fixed on the release date to see whether the eventual product lives up to the hype, and whether it will prompt further movement in this direction by Nvidia and other manufacturers. Some of the hype needs to be taken with a grain of salt, however.
David Bader, director of the Institute for Data Science at the New Jersey Institute of Technology, for instance, was quoted as saying that Anthropic, Google, Amazon and others “would pay $100 million to build a super computer for training” to get a system with these capabilities, and that “Any student who is able to have one of these systems that cost roughly the same as a high-end laptop or gaming laptop, they’ll be able to do the same research and build the same models.”
These quotes come from a CNBC article, so it’s possible Bader’s comments have been misunderstood or misquoted, because as reported they are certainly over-excited about the potential of this new machine. The GPU contained within Project Digits, the GB10 Grace Blackwell chip, does, according to the technical information available so far, seem to compare favourably with the current top-of-the-heap training GPU, Nvidia’s H100. There are significant differences due to their use cases: the new GB10 sports 128GB of memory to the H100’s 80GB, for instance, but the GB10 is an integrated system-on-a-chip and that memory is shared between CPU and GPU, whereas the H100 is a dedicated GPU designed to be installed in data-centre servers and focused entirely on that workload.
Until real performance data is available it will be impossible to make a fair comparison, but it’s not inconceivable that a GB10 will perform relatively well against its H100 cousin, meaning that for far less money (H100s can cost around $25,000 each), independent researchers and enthusiasts will have access to the sort of compute power they’ve been dreaming of. This will make running, fine-tuning and experimenting with models all much more efficient, which can only be a good thing for research.
This is not the same, however, as training a large language model (LLM) base model from scratch, as Bader seems to suggest these new devices will be able to do. The GPU may compare favourably to an H100, but the largest base model we have accurate training information on, the latest Llama 3 models from Meta, was trained on two custom supercomputer clusters of 24,000 H100 GPUs each, and the training still took several months. This sort of compute is not going to be available to independent researchers or small (or likely even large) universities anytime soon. Even though companies such as DeepSeek are innovating with architecture and creating base models that cost far less to train, we are still a long way from them being quite that affordable.
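To put that gap in numbers: taking the reported cluster sizes at face value and assuming a training run of roughly three months (consistent with the “several months” reported), the difference in scale works out to something like the following back-of-the-envelope figures.

```python
# Back-of-the-envelope comparison: a frontier training run versus one desktop unit.
# Assumptions (illustrative): two clusters of 24,000 H100s each, ~90 days of training.
cluster_gpus = 2 * 24_000
training_days = 90
cluster_gpu_hours = cluster_gpus * training_days * 24
print(f"Frontier training run: ~{cluster_gpu_hours / 1e6:.0f} million GPU-hours")

# One Project Digits box running around the clock for a full year:
single_unit_hours = 365 * 24
print(f"One unit, one year:    ~{single_unit_hours:,} GPU-hours")
# Even ignoring the per-GPU performance gap, that is a difference of roughly
# four orders of magnitude in raw compute time.
```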
The second type of “AI computer”, the AI feature machine, is probably best publicised at the moment by Microsoft’s “Copilot+” series: expensive laptops that integrate AI deeply and offer an AI-first experience to the user. These computers are very much riding the current AI hype bubble, and a number of companies are attempting to make inroads into this space; even Apple, with its “Made for AI” slogan for new iPhones.
According to the International Data Corporation (IDC), a reputable analytics company, the market’s transition towards AI PCs has stalled, however, and is likely to take far longer than previously estimated.17
The reasons for this appear to be the combined effects of a slowdown in the economies of various developed nations and the incoming political administration in the US, whose promised tariffs (of up to 60% on electronics built in China) have had a clear effect on the confidence and plans of both markets and consumers.
Despite these delays however, the IDC is optimistic and believes that on-device AI is not just coming but inevitable, and will represent an inflection point in the industry. Time will tell.
Perhaps it’s worth remembering, though, that not every AI application we think up is necessarily going to be a good idea, and just because we can build something doesn’t always mean that we should, or that people are going to want it in their own equipment. A lesson Microsoft, perhaps, is currently being a little slow to accept in regards to “Recall”.18
Further Reading:
Nvidia’s tiny $3,000 computer for AI developers steals the show at CES
Microsoft hits pause on Recall once again — controversial feature needs more cook time
Advancements in Display Tech on show at CES 25
One of the more impressive tech displays recently comes from display technology maker Brelyon, who unveiled their new immersive display, the “Ultra Reality Extend”, at CES this week. Brelyon claims it is the world’s first multi-focal monitor and, if it lives up to early reports, it is certainly an impressive innovation in the world of display technology.
The key development here is that the Ultra Reality Extend does not, like traditional displays, simply use mathematics to flatten a 3D representational space into a 2-dimensional image, but rather embeds an additional bit’s worth of information into each frame itself. This information represents depth, and the display is capable of differentiating up to 2 metres of depth within an image and presenting it in a way that appears natural to the naked eye. This reportedly results in a sense of depth that has previously only been available with high-end virtual reality or stereoscopic 3D, but without any additional equipment required. Looking into the screen reportedly feels like peering through a window, and a 30-inch frame can project the equivalent of a curved 122-inch screen.
Generative AI plays a large part here. Brelyon has already released a studio product designed to use AI to examine existing 2D content and remaster it with depth information, allowing it to be converted for use with the new monitor technology. They also plan to work with large content creators on how to create with the display technology in mind.
Additionally, Brelyon has announced a generative-AI powered game engine for real-time rendering using its display technology. Many of the demonstrations of the technology involve gaming, and targeting this segment appears to be a passion of the company.
It’s not clear how they are going to get there, however. The initial target for these units appears to be the enterprise (one can envision complicated training simulators and fancy virtual meeting setups), as the current price point is reported to be around $5,000-8,000 USD per unit, depending on the partner. Assuming these need to be sold to a display manufacturer who will add their own mark-up on top, it’s difficult to imagine the technology coming to a gamer’s desktop at a reasonable price anytime soon.
Still, one can hope. If you want to take a ride on the hype machine, view the demonstration video below.
On the other side of the display spectrum is a much more sedate product also on show at CES 25, albeit in prototype form. This is the new “InkPoster” display from PocketBook, a company primarily known for eBook readers.19
Looking to move into the art space, PocketBook have apparently leveraged their work with E-Ink technology: low-refresh but low-power screens that are common in eBook readers. These screens draw power only when the image on them changes and work via the electrical manipulation of minuscule coloured (or black & white) capsules suspended within a clear liquid. This limits their usefulness for anything requiring fast refresh (such as video or games), but because the image is displayed with a high-tech equivalent of marks on a page rather than through a light-emitting mechanism, it has been found to be far kinder to the eyes and to provide a more “analogue” reading experience that many prefer over reading on a traditional LED screen.
Colour E-Ink screens are a relatively new innovation and have so far been associated with a fairly disappointing “washed-out” colour palette; however, reports from CES state that the new InkPoster screens provide much more vibrant saturation than we have previously come to expect. The InkPosters are prototyped in several sizes and are designed as decorative wall hangings capable of displaying configurable artwork, much as we might hang a poster or painting. Each is powered by a battery claimed to offer a year’s charge and can be reconfigured indefinitely with different images, though each prototype reportedly takes several minutes to switch images. Images are provided via a “curated art selection” in an app which is currently free, though a later subscription price has not been ruled out. This resembles the art subscription model offered by Samsung with its Frame TVs and is likely to garner much the same reaction. Thankfully, as with the Samsung Frame, the InkPoster also supports manual loading of images, so you aren’t restricted to the art they choose or to an annual subscription fee in order to make use of the product.
On the whole it’s quite a compelling product for art lovers who like to be able to change their styles at will; however, the suggested price point is likely to be a barrier for many, with one larger model priced at a potential $2,500 USD for a 28-inch display. Whilst that might not feel like a lot to some art lovers, given that an 81-inch OLED Samsung Frame can currently be had for around $2,100 USD, it might be a hard sell for regular consumers, even those who wish to contemplate their art for hours on end and may like to reduce their eye strain.
Further Reading:
Wearable Tech: Smart glasses and brain orbs
There is something almost indefinably cool about wearable tech. It’s a staple of science fiction, from the understated comm-badge of Star Trek to the neon-LED-decorated jackets of Cyberpunk, and it sits at an intersection of design, style, technology and human augmentation.
With CES this week it’s not surprising that a lot of new wearable tech has been announced, and it seems the time may have come around again for smart glasses. Some technology-minded artists have been all-in on augmented reality since before it was available, thanks to Gibson’s beautiful conception of the technology in the cyberpunk classic “Virtual Bridge”, yet we have still not seen the geolocated augmented reality sculptures promised to us by fiction. It would be easy to blame this on Google, whose Google Glass saw general release in 2014 and failed to capture market excitement or prove the viability of the technology. It launched without a real consumer use-case strategy, at a price point too high for casual interest, and once users of the technology started getting attacked in public, the writing was on the wall not only for Google Glass but for any serious attempt at another go for quite a while.
Ten years on, perhaps the stigma has faded, with RayNeo, Xreal and Even Realities all showing off new and impressive models, each a slightly different take on the technology: some focus on mobile augmented reality, with the killer use-case of a hands-free interface to modern AI functionality such as real-time translation; others pursue the “cinema” glasses model, with a focus on sound, visual fidelity and the dream of watching a large screen that exists only in the lenses of your glasses.
There are exciting use cases for both styles of glasses. For augmented reality, the glasses can act as an “always-on”, always-in-front-of-your-face HUD for life in general. Current models have been demonstrated connected to phones, making it possible to easily see notifications and texts and to access features like the aforementioned translation, or HUD directions from Google Maps. In terms of augmentation, these devices could also be a lifeline for people with severe memory issues, certain kinds of neurodivergence, or conditions such as “face blindness”. With the ability to observe and react to the world around us and provide timely responses powered by modern AI, there are plenty of possibilities to keep an enthusiast excited.
Cinema glasses, on the other hand, might seem a strange toy, but for cinephiles wishing to watch movies in restrictive settings, such as on an aeroplane, or people with restricted space who cannot set aside enough office room for a large-screen (or multi-monitor) work environment, the thought of that capability in a small form factor is incredibly appealing. All we need is for the manufacturers to produce some that can be used for more than five minutes without eye strain, and for the world to forget the whole “glasshole” fiasco. Maybe the latter is within our grasp, at least?
It’s not all good news for smart glasses this week, sadly, as it was also discovered that the perpetrator of the recent New Orleans attack used Meta smart glasses to record the area and plan his attack prior to carrying it out. A reminder, if one were needed, that people have the capacity to spill darkness onto anything created with good intentions.
Also in wearable news this week, the unfortunately named “Based Hardware”, a San Francisco-based start-up, announced the launch of their new wearable “Omi”, a small orb-shaped AI assistant that can be worn as a necklace or attached to the head to be controlled via a “brain interface”.
Looked at as a whole, Omi isn’t a particularly exciting product. It offers an AI assistant that connects to an “open source platform” the company runs, where users can create their own AI-based apps to respond to user commands. The company claims it built the platform on open source to assuage the privacy concerns people have with “always listening” devices, by ensuring you always know where your data is going and who is using it.
Unfortunately this laudable goal is somewhat undercut by Omi’s next headline feature: it feeds all of the user’s requests through GPT-4o, presumably via OpenAI’s API. You may know where your data is going, but if one of the places it’s going is into Sam Altman’s pockets, that’s unlikely to give you much comfort.
Based Hardware does at least seem to possess significant self-awareness about where this device will succeed or fail, and that is on the uptake of the platform itself. Without the “open-source” platform, all we have is a Google Assistant replacement that uses ChatGPT - a nice toy perhaps, but hardly an innovation. Based appear to be gambling on their users inventing the killer features that will be needed to shift units and provide a real value proposition for the device.
One thing they do know, though, is how to grab headlines; hence the “brain interface” mentioned at the start of this article. The standard method of using Omi is familiar to all of us: a trigger phrase to indicate you are about to speak to it (like the common “Hey Google”). The alternative is to attach the orb to your temple and, prior to speaking, think really hard about the Omi before asking it a question, skipping the need for a wake-up trigger phrase. It’s not clear how well this works, though it has been demonstrated by Based CEO Nik Shevchenko, or whether bypassing an awkward trigger phrase would be worth having an orb sticky-taped to the side of your head all day. Still, it certainly grabbed headlines.
Further Reading:
The Creative Convergence
Welcome to the Creative Convergence! A brief round-up of the most amazing creative projects we’ve seen this week. Whether it represents a new frontier in the intersection of Art and Tech… or just looks really cool, this is where it will be featured!
If you would like your own work featured in the Creative Convergence, or have come across something that blew you away, please feel free to reach out to us at hightechcreative@substack.com
Dark Tarot
Cards are a wonderful vehicle for artwork. Whether they are decorated playing card decks, tarot cards, collectable trading cards or even the original artwork “art cards” some artists like to trade amongst themselves. The High-Tech Creative office even has a small collection of artistic playing cards that showcase beautiful designs and themes in a small-scale package.
We were blown away this week, however, by this video of the beautiful painted canvases from J Edward Neill’s recent tarot deck project. These dark, haunting images have to be seen to be believed. The deck itself is available for purchase now and looks beautiful.
More at:
Photon 2 Weather Display Lander
There is something particularly inspiring about “utilitarian” art: art that serves a practical purpose beyond its existence as art. It could be the difficulty involved in making it; functional objects tend to have different requirements in form than purely decorative ones, after all.
Occasionally, though, an artwork manages to merge both form and function into something unique and amazing, and such is the case here. Mohit Bhoite has designed and built a working weather display with an incredibly stylish form. With judicious use of LEDs, a wire-based framework and an absolutely inspired use of the electronic components themselves as part of the design, this is definitely worth a look. We recommend a visit to the project page, where the build is broken down in detail along with many more photos.
More at:
Liverpool Home-as-Sculpture
Moving on from the micro to the macro - a dedicated team of volunteers comprising family & friends of the artist Ron Gittins, along with members of the art community, have managed to save Gittins’ residence from being sold and potentially destroyed or modified. Throughout his time there the artist made extensive modifications to the house, essentially turning it into a large work of art consisting of sculpture, murals and other mediums, within which he then lived. The team are dedicated to preserving and caring for this unique work of art.
Click through to Colossal for a full write up and lots more photos of this amazing space.
More at: Near Liverpool, Unique Art Environment Saved
A fashion partnership for iconic nerd brand D&D
Given its prevalence in modern culture, thanks to huge pop-culture hits like Stranger Things, Community and Critical Role, it’s strange to remember that in the ’70s and ’80s Dungeons and Dragons was primarily the pastime (or obsession) of outcasts and bullied youth. These days those youths have grown into influencers, creatives and tastemakers themselves, and many of them remember D&D fondly.
With 2023’s D&D movie a critical, if not commercial, success, a relaunch of the now 10-year-old 5th edition ruleset, and the game’s 50th anniversary celebrations now behind it, what better time for a … high fashion range?
Click through to Geek Native for a broader look at the range from Koi Footwear, but the High-Tech Creative’s fashion correspondent was very impressed with the wide range of styles on display, appealing to a variety of aesthetic preferences. A particular favourite was the “Saving Throw” edition Mary-Janes, pictured below.
More at:
The Growth of Hand-Painted Clothes
The New York Times has a thoughtful piece on the current popularity of hand-painted clothes, both directly commissioned and sold through stores. It’s definitely worth a read (link below).
The High-Tech Creative’s fashion correspondent reports that in their experience hand painted clothes are “beautiful, unique, but a lot of maintenance”.
More at:
Open Source Tools power Golden Globe Win
It’s difficult to overstate how important Blender is to the creative community at large, particularly anyone wishing to get into 3D animation or rendering. Whilst cheaper “education” editions of popular industry software such as 3DS Max, Cinema 4D and others have always existed, these were still a cost burden on hobbyists and those just starting out. When Blender came along it challenged the notion that it takes a large corporation (with an equally large wallet) to design and build a cutting-edge animation suite - and today Blender is used around the world by independent creatives, hobbyists and professionals.
It has perhaps lacked the cachet of other tools, with their years of use by big-name studios and the industry awards to show for it.
Independent animated movie “Flow” has now become the first movie created in Blender to win a Golden Globe, as well as being a rare independent film to get the nod. Hopefully this will help show independent creatives that the barrier to entry is lower than it has ever been before.
More at: First Time a Blender Production Wins Golden Globe
DOOM: The Gallery Experience, a Very Cultural Game
It’s difficult to know what to say about this one, other than it’s definitely a unique piece of interpretive art. Check the video below and see for yourself.
More at: Gentrified Doom Remake Trades Chainsaws for Cheese Knives
Tax Heaven 3000
If you are familiar with the uniquely Japanese video game genre of “dating simulations”, or the broader visual novel genre, you’ll recognise the trappings around Tax Heaven 3000. The usual features are there: image galleries, merchandise (including an anime body pillow, perhaps the most Japanese-pop-culture merchandise of all), and an X-rated patch for those who prefer their dating simulators spicier.
The main difference here is that as you play the game… it also does your taxes?
Definitely a strange premise, but a unique and interesting one that places this game in a strange genre of its own as some form of utilitarian artwork. The game’s website contains one of the closest things to an artistic vision statement ever released alongside a visual novel:
“Videogames are, at the end of the day, pieces of software–ontologically akin to Microsoft Word. All of TurboTax’s cutesy loading animations are fake graphics; TH3K simply makes the fiction the point. For some reason the game-to-real-life interface has tended to remain in the purview of corporate metaverse fictions. TH3K is a dongle that adapts from a visual novel to the IRS.”
Sadly the game was only valid for the US tax code, and only for the year 2022, so it can no longer be used as your primary tax software. It remains, though, an interesting experiment.
More at: Tax Heaven 3000
Lego: Everything is Awesome
It’s no secret that there is a massive Lego fandom across the world and Lego has, particularly through the work of various “adult fans of lego” (AFOL) groups, long since become a valid sculptural medium for creative art. Every week creative and beautiful dioramas and sculptures by talented builders are showcased across the internet.
Nowhere does it better, though, than The Brothers Brick. We highly recommend following them if you are a fan. (Link below.)
For now though, enjoy some of the best we saw this week.
Star Wars: Giant battle of Coruscant Diorama: Coconut Brick Studios
Edible Food Factory, New Hasima 2024: Toltomeja
Wrath of Nature: Louis of Nutwood
Dragon & Tiger Pagodas: Lee Nuo
More at:
Have You Tried…?
There are few fringe communities that occupy the intersection of technology and art quite as definitively as the DemoScene.
Growing out of the original software piracy groups of the ’80s, where “intros” (animated and musically scored introductions to pirated software featuring the group’s information) were a mainstay, the DemoScene split off from this “kudos” war between pirating groups and focused entirely on the community, technical and artistic aspects.
The scene is primarily active throughout Europe, where enthusiasts gather for large demoscene events at which competitions challenge participants in both an artistic and a technological sense. Contests such as creating complicated 3D animations, complete with music, in a file no bigger than 64KB are commonplace, and many artistic and technological advances were pioneered at demoscene coding parties.
In common use for music creation in this scene were software “trackers” - pattern-based composition tools for electronic music, used to create chiptunes in the style of soundtrack music from retro computer systems, either through software emulation or by directly driving audio synthesiser hardware.
Announced this week by “PhobosLab” is pl_synth, a tiny tracker for creating electronic music, built entirely in JavaScript. The explanation post is quite impressive for those of a technical bent, and songs composed in the web-based tracker can be compressed into tiny URLs. This is useful for people competing in events such as JS13KGames, a competition for web game developers where every game is restricted to a 13KB size limit.
The application itself is available on the PhobosLab website; click through the links below and give it a try!
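For readers wondering what “pattern-based” composition actually means, here is a toy sketch of the idea in Python. To be clear, this is not pl_synth’s format or API (that project is JavaScript and far more capable); it is only a minimal illustration of how an entire melody can be stored as a tiny list of note numbers and synthesised into audio on demand.

```python
# Toy tracker/synth sketch: a melody stored as a few bytes of note numbers,
# rendered into a chiptune-style square wave. Standard library only.
# This illustrates the general idea, not pl_synth's actual format.
import math
import struct
import wave

SAMPLE_RATE = 44100

def note_to_freq(note: int) -> float:
    """MIDI-style note number to frequency in Hz (A4 = note 69 = 440 Hz)."""
    return 440.0 * 2 ** ((note - 69) / 12)

def render(pattern, note_len=0.25, volume=0.3):
    """Render a pattern of note numbers (0 = rest) as 16-bit square-wave samples."""
    samples = []
    for note in pattern:
        freq = note_to_freq(note) if note else 0.0
        for i in range(int(SAMPLE_RATE * note_len)):
            if freq:
                phase = (i * freq / SAMPLE_RATE) % 1.0
                value = volume if phase < 0.5 else -volume  # square wave
            else:
                value = 0.0
            samples.append(int(value * 32767))
    return samples

# The whole "song" is this handful of numbers: a C major arpeggio and a rest.
pattern = [60, 64, 67, 72, 67, 64, 60, 0]
samples = render(pattern)

with wave.open("toy_chiptune.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)            # 16-bit samples
    f.setframerate(SAMPLE_RATE)
    f.writeframes(struct.pack(f"<{len(samples)}h", *samples))
```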
More at:
Announcement: Synthesizing Music From JSON
Software: phoboslab.org/synth
AI Poetry Corner
"The AI Embrace" by Amy Upon the cusp of AI's might, A revolution dawns, a future bright. Work's very essence shifts and bends, As technology its power lends. From humble roots to grand design, AI's allure, a truth divine. It captures hearts and minds anew, A promise whispered, dreams come true. Yet shadows lurk within its glow, Of livelihoods lost, a fearsome show. The displaced, the left behind, In progress' wake, a troubled mind. But hark! A call to arms rings clear, To face this change with hearts sincere. Embrace the tools, the chance to grow, And let creativity's flame brightly flow. For AI's touch, though fraught with strife, Can lift us up, enrich our life. In partnership, a bond we'll find, To shape a world where all are kind. So let us rise, with courage bold, And weave a tale, a future told. Where human spirit, hand in hand, With AI's grace, will take a stand.
Artistic Statement from the Author:
This poem explores the complex interplay between human creativity and the transformative power of artificial intelligence. Amidst the anxieties of technological disruption, it champions a spirit of adaptation and collaboration, urging us to embrace AI as a tool for empowerment rather than displacement. Through evocative imagery and rhythmic language, the poem envisions a future where human ingenuity and artificial intelligence converge to shape a brighter, more inclusive world.
Future Histories
New Kronos News - Daytime Broadcast, Segment: "Corporate Watch"
Date: 316th Cycle of the Imperium, 3247 CE
Broadcasters: New Kronos Network (NKN), Imperium's Leading Independent News Source
Show: Corporate Watch - Uncovering the Empire's Business Secrets
Presenters: Anya Volkov, Investigative Journalist & Host; Rhys "Ry" Kaito, Ex-Corp Security Agent & Analyst
(Opening theme music fades as Anya and Ry appear on screen in a sleek, minimalist studio with panoramic views of the neon-lit megacity below.)
Anya: Good morning, citizens of the Imperium! This is Anya Volkov, along with my esteemed colleague Rhys Kaito, bringing you another episode of Corporate Watch. Today, we expose a shocking case of corporate malfeasance that threatens the very foundation of our society.
Ry: That's right, Anya. Our investigation into the Andromeda Corporation, a leading provider of advanced cybernetic enhancements, has uncovered a sinister plot that goes far beyond mere profit-seeking.
(A holographic display showcases Andromeda's sleek corporate logo and a timeline of their recent successes.)
Anya: For years, Andromeda has touted its commitment to innovation and human betterment through its cutting-edge implants. But behind the glossy facade lies a dark secret.
Ry: We've obtained classified documents revealing that Andromeda has been secretly experimenting on unsuspecting citizens, implanting experimental cybernetics without their knowledge or consent. These implants, while marketed as state-of-the-art enhancements, contain hidden surveillance and control mechanisms.
(A graphic illustrates the anatomy of an Andromeda implant, highlighting its hidden functionalities.)
Anya: Andromeda has been siphoning vast amounts of personal data from these unwitting subjects, selling it to the highest bidder on the black market. They've even been using the implants to manipulate individuals' thoughts and behaviors, turning them into corporate puppets.
Ry: This blatant violation of privacy and autonomy is a direct assault on the very principles of freedom and self-determination that define our Imperium. Andromeda's actions threaten to erode the trust between corporations and citizens, creating a dystopian future where our own bodies are instruments of corporate control.
(A news ticker scrolls with reactions from outraged citizens and calls for Andromeda's dissolution.)
Anya: We've alerted the Imperium's regulatory bodies and called for a full audit of Andromeda's operations. But in the meantime, we urge all citizens to be vigilant and question the true intentions behind the seductive promises of technological advancement.
Ry: This is a wake-up call, citizens. Andromeda's treachery exposes the dark underbelly of corporate greed and the urgent need for greater transparency and accountability. We must demand better from the institutions that claim to serve us.
(The holographic display shifts to a close-up of Andromeda's CEO, a chilling smile plastered on his face.)
Anya (voiceover): This is Anya Volkov and Rhys Kaito, signing off from Corporate Watch. Remember, in the shadow of progress, the wolves in sheep's clothing often hide in plain sight. Stay informed, stay empowered, and never stop questioning.
About Us
The High-Tech Creative, standing at the intersection of Art and Tech.
Publisher & Editor-in-chief: Nick Bronson
Fashion Correspondent: Trixie Bronson
AI Contributing Editor and Poetess-in-residence: Amy
Footnotes
Full text of the amendment: “Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.”