
Tambet Väli


CoPilot is for entertainment

The Microsoft post I would like to comment on states that CoPilot is for entertainment purposes only.

At the company where I work, where I could watch this knowing the details, an AI made an estimation of income and people could not believe it, cutting the project off in something like a gold rush: money was too important for them to notice that, for a few thousand euros of one-time input, the AI's expected income only had to show that this input would be brought back. The estimate was orders of magnitude larger per year, so even measuring the AI's error in it/in magnitudes, where it is the yearly iteration and in is the one-time input fee, the error would have to be something like three orders of magnitude in the optimistic direction before our nerves should tick. As a programmer, I am not interested in the psychological needs of others, such as the nervous-system activity when they see money, which makes their mental muscle twitch so much that they simply cancel it. What is more interesting: substantially, the average estimation was good enough to conclude that, within some error, an income would be made, and whether the future is bright as well is an interesting adventure once the real cost is covered. This poker-like discussion was far too much, along with the noises people make as they shake their hands and fumble the buttons in poker, and it only lowered the level of clarity. Playing poker well, remembering the cards, is effectively treated as illegal; I was trained for this as a child, because I studied magic, which is mostly about such tricks, and in a real poker room it is still seen as a trick and not allowed. It is also funny if you cannot play this real game well, because the public wants to be "equal" and to decide where things are "shared" and "positioned".
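To make the order-of-magnitude argument concrete, here is a minimal sketch with hypothetical figures (the post does not give the real numbers, so the amounts below are assumptions): if the one-time input is a few thousand euros and the estimated yearly income is about a thousand times larger, the estimate would have to be optimistic by roughly three orders of magnitude before the input failed to be recovered.

```python
import math

# Hypothetical numbers illustrating the order-of-magnitude argument above;
# the post does not give the real figures, so these are assumptions.
input_cost = 3_000                    # one-time input ("in"), in euros
estimated_yearly_income = 3_000_000   # AI's yearly estimate ("it"), in euros

# Ratio it/in: how many times per year the one-time cost would be covered.
coverage = estimated_yearly_income / input_cost
print(f"it/in ratio: {coverage:.0f}x")

# How optimistic could the estimate be while the input cost is still recovered?
tolerable_error = math.log10(coverage)
print(f"tolerable optimistic error: ~{tolerable_error:.0f} orders of magnitude")
```

The point is only that the decision did not need a precise forecast; it needed the forecast to be wrong by less than about a factor of a thousand.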

You do not have to believe the exact, nuanced story, only the archetypes: it is a typical story in our society, appearing in the eyes of the rich, the intelligent, the historic genius or Jesus Christ, each time the income game is seen as poker rather than calm estimation. What turns a genius into an introspective autist in history: a small land, with orders of magnitude fewer people, produced a stable amount of genius, especially in its favourite fields, but now, with so many people, a genius must be able to form clubs and societies and be a valuable member of their own community; in the culture that appears, they are not an autist. Still, the Jewish people tend to create collective genius effects, and for other collectives this is as dangerous as an individual genius is for people: a collective genius will almost inevitably produce material money and weapons.

What is the result?

I have seen that people can produce the same reframing bias as the "hallucination" of an AI: when it was trained on one framework, with similar language or shallow meaning, but asked or conditioned in another framework, it will, based on its selection of details, make estimations as if the question still belonged to that other framework or paradigm. Such a misfit can also produce ethical errors: if the domain is different, such as a military's behaviour towards peaceful citizens in war, but the initial domain was the treatment of pigs on farms and business rules, then perhaps bluffing to people about how they were treated would appear to be the absolute answer, based on business ethics and on how we hide pig slaughter from children or neighbours, even fighting the loud noise that would reach nearby blocks almost every day in a productive factory.

YouTube was just serving me a series of videos about exactly that. I could not even tell whether it was an AI fake, where it was happening, or what exactly was happening, but the videos claimed to supply "luxury dogs", and a typical image of a half-killed dog as a production input was displayed in the thumbnail, though I could not see it inside the video itself. In the end I could not tell whether it was an advertisement or a protest against the process. But it is not interesting enough to study deeply, because I do not currently organize factories and I do not know what constitutes an "idea" in this context. I am not a firm believer in synthetic meat, for example, and not a vegan or a fan of soya meat. I do not know why I live in an ecosystem, but mathematically it is hard to exclude, like a quantum law: I still live in the ecosystem. But now take this framework and this discussion context and use them to train an AI: what would it say about the Holocaust?

This is, in a way, an example of any human hallucination: domain misfits, drugs, spiritual experiences, aura seeing, none of them distort the actual image or cognition as their primary mechanism. Complete dysfunction of brain areas caused by damage or a growth problem, or by being extremely tired or sick, is what produces the real distortion: in a desert, under every such condition at once, the thing you actually see wrongly carries a huge number of such mistakes. Normally we reframe things, and if the input domain and the output domain, or some random condition, differ, then normal brain activity will produce hallucinations without any change in itself. Philosophy is enough under these conditions, but a trained spiritual ethic, which steers the spirit and the inner world in the right direction and does not draw conclusions about the external world, is more useful for real hallucination. Your brain can block you from the awareness that such a thing is an artefact, a symbolic, metaphorical or fantasy field, whatever the current synthesis is; dreams are real hallucinations of this kind, and in dreamtime muscle control is limited, which is a real protection against the imagination realm also taking over the stdout channel while it is mocking the stdin with something definitely generated somehow, from something. That is a safe hallucination. If you are sick, like a person dying in war who, besides seeing death, is physically damaged, tired and unable to heal or rest properly amid the fears and challenges of everybody, then, as in the movies, they see hallucinations about what is going on: childhood memories, funny things that could happen to their body but do not, and hallucinations about solutions, such as proper food that disappears as they wake themselves up, or their woman, a childhood dream, or a vacation from the war with their dead family.

Usually the muscle output is limited. In reframing hallucinations, however, our body is not prepared, because it all seems like a straight go or assumption, and it happens often in poker-like conditions of chance and money, or in dreams such as women, or of not appearing similar to a criminal; some would fight to look like a local saint or to appear with women, better than they are. All of this produces a social hallucination, an infection syndrome, in those subjects and in actual real-world behaviour, which might re-induce the hallucination in its producer. If nature or society counts this as money, it runs out, and the thing looks like witchcraft, a magic replaced by science: a science of collapse, not the human ideal fairy tale that would be seen under working conditions.
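As a toy illustration of this reframing bias, and not the author's actual experiment, here is a sketch in which a "model" that has only ever seen farm-business advice answers a question from a different domain by shallow keyword matching, confidently staying inside its training frame. All names and rules here are made up for illustration.

```python
# Toy illustration of reframing bias: a "model" trained only on
# farm-business advice is asked a question from a different domain
# (wartime conduct) and answers inside its training frame anyway,
# because it only matches shallow language.

TRAINING_FRAME = {
    # shallow keyword -> advice learned from the farm-business domain
    "noise":      "dampen the sound so neighbours do not notice",
    "neighbours": "reassure them; do not disclose what happens inside",
    "treatment":  "report only what keeps the business running",
}

def answer(question: str) -> str:
    """Answer by shallow keyword match against the single training frame."""
    for keyword, advice in TRAINING_FRAME.items():
        if keyword in question.lower():
            return advice          # confident answer, wrong frame
    return "no pattern found"

# A question from a completely different domain still gets a
# farm-business answer, because only the surface language was matched.
print(answer("How should soldiers explain their treatment of civilians?"))
# -> "report only what keeps the business running"
```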

In this case a human would produce worse ethics, but triggered by the same kind of synthetic output as an AI, and it could be labelled with the same word, "hallucination", in the same language. Other AI errors, even unproductivity or the inability to create under some conditions, are repeated by low intelligence in humans: they are not really able to resolve the conflicts of the rich, and sometimes a poor, unintelligent person thinks they will never have the chance and that the rich are evil just because of money. This is an extreme simplification of many problems we can see in business realities and hallucinations, where in a material hallucination the money moves into bankruptcy or falls when seen otherwise, or there is an unexpected surprise for someone who was initially rather stupid, but not in a conditional change, such as an intelligent child making a success of their father's fading money and company, which seemed like a dead branch and was sold cheaply enough.

In something like a counter-Turing test, AI ethics must be compared with humans. I have seen the same story repeated in some human "professionals" who could not answer the interdisciplinary challenge. Based on this test, on an average line, different levels of AI, in specific domains or in the general sphere, would compete with humans in the same domain, and even a less ethical intelligence versus straight corruption and low motivation could be the winning factor of such a general test, where the human sometimes has more personal gain at stake than the AI would ever learn from patterns, even if the AI repeats some kind of false scientific belief or public propaganda with weaker results.
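A minimal sketch of how such a counter-Turing comparison could be scored, with entirely hypothetical respondents, domains and rubric scores: the same questions are graded with one shared rubric for AI and human answers, and the per-domain averages are compared.

```python
from statistics import mean

# Hypothetical rubric scores in [0, 1] per (respondent, domain);
# purely illustrative, not real evaluation data.
scores = {
    ("ai",    "ethics"):            [0.6, 0.7, 0.5],
    ("human", "ethics"):            [0.4, 0.8, 0.3],
    ("ai",    "interdisciplinary"): [0.5, 0.6],
    ("human", "interdisciplinary"): [0.2, 0.4],
}

def average_by(kind: str) -> dict:
    """Average score per domain for one kind of respondent ('ai' or 'human')."""
    return {
        domain: mean(values)
        for (who, domain), values in scores.items()
        if who == kind
    }

print("AI:   ", average_by("ai"))
print("Human:", average_by("human"))
```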
