Hyperbola and machine learning
In this article we describe our stance as a system project towards machine learning via so-called large language models (LLMs for short). Before going further into this topic, we first need to define the terminology, so that every reader can correctly follow the points, notes and statements leading to the final conclusion of Hyperbola's stance against machine learning.
What is a large language model?
A large language model is a computational model designed to perform natural language processing tasks, especially language generation, using contextual relationships derived from a large set of training data. LLMs can generate, summarize, translate and parse text in a variety of contexts, and are the technological underpinning of modern chatbots. LLMs can convincingly mimic natural language patterns because they are trained on large collections of human-written text. Depending on the model, the initial query or user question can also be used to generate text, audio, video or images, or to perform further automation.
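To make this concrete, the following is a deliberately minimal sketch in Python, not taken from any real system: a bigram model that records which word follows which in a toy corpus, then samples new text from those counts. A real LLM replaces the counting with a neural network holding billions of learned parameters and considers far longer contexts, but the principle, deriving contextual relationships from training text and generating new text from them, is the same. The corpus and all names here are illustrative assumptions.

    import random
    from collections import defaultdict

    # A tiny illustrative training corpus; real models ingest terabytes of text.
    corpus = "the cat sat on the mat and the dog sat on the rug".split()

    # "Training": record which word follows which word (a bigram model).
    # An LLM does this in spirit, but over long contexts with learned weights.
    follows = defaultdict(list)
    for current_word, next_word in zip(corpus, corpus[1:]):
        follows[current_word].append(next_word)

    def generate(start, length=8):
        # Generate text by repeatedly sampling a plausible next word.
        words = [start]
        for _ in range(length):
            candidates = follows.get(words[-1])
            if not candidates:  # no known continuation: stop
                break
            words.append(random.choice(candidates))
        return " ".join(words)

    print(generate("the"))  # e.g. "the cat sat on the rug"

Even this toy example shows why the training data matters so much: the model can only ever recombine patterns that were present in what it was trained on.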
What is the problem?
There are several perspectives from which the ongoing usage of such models and services should be stopped immediately. First and foremost, no such model, and therefore no machine-learning service built on one, can be called free from bias. Inaccurate training data makes the resulting LLM's output less reliable and, even worse, enables it to replicate outright lies, propaganda, hatred and harassment, and pseudo-scientific arguments and results. The resulting social upheavals are already visible worldwide.
So when we talk about AI (as a shorthand), we mean the services already described, which are currently very present in society: large data models that calculate future or new results from existing data sources. This leads to an even more severe point: where does this mass of data come from? Who obtains which kind of data, and who stores it where, for what exact purpose, under what conditions and for how long? Nobody has a clear answer to these fundamental questions, so we can only assume that training data is collected unethically and stored, without common knowledge, for present or later evaluation and usage. This is in no way compatible with data protection or with the security and privacy of people. We are talking about technology that learns patterns from an immense amount of data and produces new results using these patterns.
This technology inherently lends itself to exploitation by authoritarian and fascist forces. This is not a technical flaw; the technology is simply designed this way. Another inherent characteristic is that these gigantic models necessitate a high degree of centralization. The infrastructure required for training and maintaining these models cannot be housed in a private basement, and to be blunt: anyone claiming that it is possible to host, train and maintain such structures on that scale is spreading a lie. The massive data centers involved can only be operated centrally. The infrastructure is therefore politically advantageous for fascist policies precisely because it strongly favors an authoritarian bundling of power and structures itself that way.
Furthermore, there is the already mentioned pattern recognition, which is always trained on past data. It therefore necessarily projects this past into the future and statistically standardizes that future into a white-patriarchal average. According to various theories, at the heart of fascism lies the idea that there is a truth, a deep, nationalistic, genuine life within society, which should be brought to fruition. This idea of a rebirth of national strength is reflected in the technical principle of collecting massive amounts of data and extrapolating a supposed truth from it, forming a tomorrow with a purely authoritarian orientation, without democratic values, in which any form of existence has purpose only insofar as it serves this order, now and forever; any contradiction is under no circumstances allowed or even tolerated, and would be drastically punished.
Machine learning is accelerating societal divisions, not only in media discourse, by undermining democratic debate with plausible-sounding outputs that lack any claim to truth. It is already devaluing work. People fear being replaced by automation, and unions are lowering their demands. At the same time, those who have invested in stocks are reaping even greater profits. Thus, economic inequality is growing even more due to machine learning and the resulting services. To be honest and clear: what can a company or corporation contribute to society? Why should it even think about that when its foremost perspective is simply to create more profit and income? No company or corporation has a clear perspective on society, the environment and the ecological diversity that we as humankind depend on.
As a system project we are confronted with the direct consequences, foremost in their technological aspects and dimensions. But not only that: we also need to care when others no longer care about security and privacy themselves, especially given Hyperbola's focus on minimalistic solutions that take privacy and security into account. It is not the single search query on such a service that brings us closer to fascism. It is the constraints that exist in many areas: for example, the fact that many people now have to do things faster and therefore use such processes, or that they are in competition with others to produce better business results. This leaves the question: why do we need ongoing competition instead of favoring working together? That is the point we are focused on at Hyperbola: we need concrete ideas about a reality in which the technologies we are currently fighting against function differently, being either stopped for a better tomorrow or redefined; though we note that machine learning is clearly not a technology that can be redefined. Hatred against something does not bring us forward as a society; ideas, values and principles are needed.
Outcome and decisions for Hyperbola
The Hyperbola project has decided to completely reject any kind of machine learning. We do not accept documentation, source code, images or any other kind of data created through such services. We also clearly reject machine-based review processes, as we want software from human beings for human beings. We know that this may lead to slower development as a whole, but our reasoning is the trust-based model: for us, for the people, for the community and for everyone interested in an ethical, moral future and a better tomorrow.