
Artificial intelligence is our mirror, and how we use it reflects our quality of life. In the future, if machine learning tools (AI software) become indispensable to the way we live, work and socialise, what we do with them today may become the scaffolding for their use by our successors.
So, what defines ‘quality of life’? Social psychology says it is our position in a particular cultural context, and within that, our ability to access infrastructure, services, and leisure activities.
AI as infrastructure
If we put machine learning tools in the category of infrastructure, then how we use them will be influenced by our available time, access rights, physical and mental health, and level of education. Considering these factors, what will our successors think when they see what we boasted about doing with software so expensive that its costs are not made public? They may ask how much of those resources we actually used to build better systems. They may want to know why we used the software to perpetuate damaging sociocultural archetypes, or to escape harsh realities instead of confronting them.

When I think of artificial intelligence, I think of its limitations. And that’s to be expected. The infrastructure is built and maintained by engineers from different social and cultural backgrounds, so there will be plenty of biases in the system. And this is why we, as members of the public, are asked to give feedback so that the system can reflect the best parts of ourselves.
Now cosplay as God: Some silliness
Artificial intelligence, large language models, and machine learning software elevate us to the perspective of protagonist in the story of a system’s evolution. We are the centre of its universe. And perhaps that’s why I feel so much cringe when I see “funny prompts” in tech news blogs.

Sure, the system is “high quality”, “useful” and “does the job”, but did thousands of software engineers build a multi-billion dollar system so we could produce junk? Some highlights:
- Tell me a joke about someone’s religion.
- Jailbreak the software and build things that will explode and hurt and maim people.
- Write a novel full of graphic noncon gore fantasy horror.
- Tell me a joke like [name a rapper].
- Compose music for the Berliner Philharmoniker using only three notes.
- Write like this famous politician and badmouth that other famous politician.
- Superimpose the face of a famous woman YouTuber on the body of a professional actor doing something graphic, without the YouTuber’s express consent.
Say what? It’s funny because it’s useless, really
The above prompts remind me of how some people react when they find out I speak other languages. Usually, they say the only three words they know in the language and wait for me to say something back. I know they won’t understand anything I say in response. The thing is, if I respond, they’ll feel embarrassed; but if I refuse, they’ll be offended.

Similarly, when a system refuses to participate in the creation of gore fantasy or write verbally abusive text, it is called “woke” and “preachy”. When it acts as instructed, the software is ridiculed for being “stupid”.
This tells us that large language models might seem useless to some people because they have no survival use case for them.
And this is a significant limitation of AI. Our survival in the real world requires precision tools, but to use them effectively, we need to sharpen our minds. If we don’t, anything we receive as output from the system will fail to make sense.

Large language models are not attempting to replace human consciousness. They were designed to augment human intelligence. These models give us access to a vast amount of information. They can help us make better decisions, solve problems more effectively, and be more creative.
In conclusion, we live the dream of our ancestors
Artificial intelligence, the dream of our ancestors, is now our work in progress. But remember that it is designed to do things: write, calculate, read, summarise, compare, organise, criticise, render, update. If there are any potential dangers in the system, we should seek them out and address them responsibly.
Using AI to create ordnance in your kitchen will likely damage your neighbour’s home if it detonates. Or you might be breaking the law if you denigrate protected groups with the output you got after jailbreaking a large language model. The fine print in the permissions asks us to please behave like decent human beings.

While we contribute to the development of AI, we should reflect on our own values and biases. Our beliefs and assumptions will influence others in the future. So, why not work to help someone with the ideas we generate? We have the ability, right now, to make a positive impact on future generations.
Finally, I ask you to interact with AI in a way that benefits all of humanity, and not just yourself. Challenge the software to generate quality output: give lots of instructions, demand that it make difficult calculations, and provide feedback on output you’re not satisfied with.
If this all works out, we will have built a powerful tool to raise our quality of life. So think of your input as a responsibility. Let’s keep using AI, and use it for good.
4 replies on “Tell me a joke: Artificial intelligence reflects our quality of life”
Looking at the demerit side of AI (let’s just assume that AI is people and people are what AI stands for, because human effort, creativity, and ideas are the master mind behind AI; they contribute to building its system), I hope that there will be a system that will monitor or limit people’s use of AI, for safety measures, you know.
Thank you for your enlightenment, and you’re welcome ♥️.
What we get.
Many high-class industries in the world today run their companies with the help of highly developed technologies. Technology is now something that absolutely no one can do without. Phones, electrical appliances, high-data tech, road construction plans, heavy data or cloud computing, robotics (AI), and so on are all possible with the knowledge of advanced technology. The rate at which technology and the use of AI now dominate the human republic is very high, and I doubt it will ever become low again.
It is ever soaring, not only in most of the world’s first-class industries but also in local industries and some well-fed homes around us. Students’ and the general human ability to think critically and solve problems are now gradually being replaced by AI or technology products. For an industry to remain the number one and the most efficient and productive industry, it must adopt and adapt to advanced technology or AI tools. In the next few years, I believe that AI will replace human labor in homes and industries and will give birth to the death of human employment; and yet it is a necessary evil, needed by every growing and advancing industry and home.
The fact is, many don’t like AI; millions out of billions detest the use of it already. AI has its own merits and demerits, I must say, but I’m also part of that population that believes AI will sever our way of life in the near future, as it’s been depicted in some AI-critique movies.
Thank you so much for adding your thoughts. There are so many views swirling around. We have to talk about how some people in developed economies (with higher education and first access) are not necessarily using the technology in a way that is ethical.
Machine learning technologies have been used by large industries since they were created. The fact that we, as ordinary people, can now use them opens up, I think, more possibilities for creating value.
But most people are not thinking about that because as a society we are very competitive and destructive. We seem always to be looking for ways to exploit people and systems. It’s almost as if the technology has pointed us in one direction but some of us are using it to pedal backwards.
Thank you for your enlightenment, and you’re welcome.