Singularity 2024-2025 - Доброчан - Endchan Magrathea

/dobrochan/ - Доброчан

Dobrochan's bunker, for waiting out all manner of shake-ups.



attachment: GWkSrpUa8AAD89Y.jpeg (170.48 KB, 2706x948)
Is there anything left to discuss? The bubble is already cracking; even the most stubborn AI believers have already dropped off.


>>/dobrochan/4231@4226
No matter how many gigawatt clusters they build, the bubble will burst soon. Time to hurry up and sell suckers news about virtual clusters, raise money for "ethical AI", or fleece donkey-fucker sheikhs like Altman does https://www.wsj.com/tech/ai/sam-altman-seeks-trillions-of-dollars-to-reshape-business-of-chips-and-ai-89ab3db0

attachment: Screenshot 2024-09-12 at 2.27.07 PM.png (217.78 KB, 1160x1386)
>>/dobrochan/4232@4226
Any moment now it'll burst! Any moment the investors will flee from the scammer Shmul Altman and the rest of the Zuckerbrins and Sutskevers! And then the era of Kalsonno-Aryan computing will dawn, free of any god-abhorred neural networks!

Meanwhile, in reality.

OpenAI o1 ranks in the 89th percentile on competitive programming questions (Codeforces), places among the top 500 students in the US in a qualifier for the USA Math Olympiad (AIME), and exceeds human PhD-level accuracy on a benchmark of physics, biology, and chemistry problems (GPQA).

I predicted all this back in 2021. Suck your shit, subhuman.

attachment: Screenshot_20240913-210328~2.png (766.12 KB, 1080x2020)
>>/dobrochan/4233@4226
>> OpenAI o1 ranks in the 89th percentile on competitive programming questions (Codeforces)
Nine out of ten dentists choose Blendamed. "Blendamed" - and your repeatedly stained pants gleam white again!

>>/dobrochan/4234@4226
Terence Tao, Fields Medal winner:

I experimented a bit with OpenAI's new iteration of GPT, GPT-o1, which performs an initial reasoning phase before running the language model. It is certainly a more powerful tool than previous versions, though it still struggles with the most challenging research-level math problems.

Here are some specific experiments (with a prototype version of the model I was given access to). In https://chatgpt.com/share/2ecd7b73-3607-46b3-b855-b29003333b87, I repeated an experiment from https://mathstodon.xyz/@tao/109948249160170335 in which I asked GPT to answer a vaguely formulated mathematical question that could be solved by identifying a suitable theorem (Cramer's theorem) from the literature. Previously, GPT might have mentioned some relevant concepts, but the details were hallucinated nonsense. This time, Cramer's theorem was identified and a satisfactory answer was given. (1/3)

In https://chatgpt.com/share/94152e76-7511-4943-9d99-1118267f4b2b, I gave the new model a challenging complex analysis task (for which I had previously asked GPT-4 to help write a proof in https://chatgpt.com/share/63c5774a-d58a-47c2-9149-362b05e268b4). Here the results were better than with previous models, but still slightly disappointing: the new model could arrive at the correct (and well-written) solution *if* provided with many hints and leading questions, but it did not independently generate the key conceptual ideas, and it made some non-trivial mistakes. The experience was roughly comparable to advising a mediocre but not entirely incompetent graduate student. However, this was an improvement over previous models, whose capabilities were closer to those of an actually incompetent graduate student. It may take only one or two further iterations of improved capability (and integration with other tools, such as computer algebra packages and proof assistants) before the level of a "competent graduate student" is reached, at which point this tool could become significantly useful for research-level tasks. (2/3)

As a third experiment, I asked (in https://chatgpt.com/share/bb0b1cfa-63f6-44bb-805e-8c224f8b9205) the new model to begin the task of formalizing a result in Lean (namely, establishing one form of the prime number theorem as a consequence of another), breaking it down into sub-lemmas for which it would formalize the statement but not the proof. Here the results were promising, in that the model understood the task well and performed a sensible initial breakdown of the problem, but it was limited by the lack of up-to-date information on Lean and its math library in its training, and its code contained several errors. However, I can imagine that a model with these capabilities, specifically fine-tuned on Lean and Mathlib and integrated into an IDE, could be extremely useful in formalization projects. (3/3)
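For readers unfamiliar with the workflow Tao describes - stating sub-lemmas whose proofs are deferred, to be filled in later - a minimal Lean 4 sketch of what such a statement-only skeleton looks like (toy statements of my own, not the model's actual output):

```lean
-- Statement-only skeleton: each sub-lemma is stated and its proof is
-- deferred with `sorry`, to be filled in later (by human or model).
-- The statements are hypothetical toys; a real formalization of the
-- prime number theorem would build on Mathlib.

theorem subLemmaA (n : Nat) : n ≤ n + 1 := by
  sorry

theorem subLemmaB (n : Nat) (h : 0 < n) : 1 ≤ n := by
  sorry

-- The main result is assembled from the sub-lemmas once they are proved.
theorem mainResult (n : Nat) (h : 0 < n) : 1 ≤ n + 1 := by
  sorry
```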

Kalson, multiple-time laureate of the Lenin Prize for Achievements in Artistic Coprolalia:
>> Nine out of ten dentists choose Blendamed. "Blendamed" - and your repeatedly stained pants gleam white again!

>>/dobrochan/4235@4226
>> The chatbot successfully googled something and shat it out somewhere.
The important thing here, Father, is to watch yourself: when you cry "glory to the robots!", you must try to restrain yourself and not soil your pants with a synchronized eruption.

I hadn't visited the thread for six months, and yesterday I spent an hour and a half reading everything posted in that time.

I noticed that if a couple of years ago Zhidoshiz was a fanatical Z-patriot, he is now just as fanatically gushing about the "Redberry in the Cock of the World" and the God-preserved City on the Hill. So the elderly Soviet Jew Kalsonenko with his "Odessa humour" turned out to be right, while the young genius of Civilization, to whom Elon Musk himself replies on Twitter, was mistaken? How did that happen? Let's figure it out.

In the Eastern European outskirts of the world, bruised by the Soviet experience, it is customary to admire techies and despise humanities types. After all, the former launched the first man into space, while the latter can only drone on about Karl-Marla and historical materialism. And to this day the IT boyar is a rich, privileged member of society.
And in the West? In the West they laugh at infantile tech bros, who consist either of guest workers or of local autistic losers. In the case of ML this is quite logical: endlessly fitting hyperparameters without explaining or interpreting the results is beneath a white man's dignity.

The guest workers have their own culture, their own ideology. It goes like this: work harder than the locals to bring the singularity (a.k.a. the second neuro-coming) closer, never miss a model release, sniff every researcher's fart on Discord. It is important to listen to the diversity manager and follow her recommendations in everything. For the sake of ascension to silicon paradise one can endure, so no grumbling. There is also no need to build a life of your own - why bother, when the game will soon be over? Let the stupid normies make friends, raise families, keep hobbies, and mind their health. At most, one may "improve" oneself with biohacking (https://t.me/mixail_kain/630) so as to be sure of living to see the cherished singularity.

Of course, with this approach the cuckoo flies off quickly and for good. But that's no great loss, because a worn-out guest worker can simply be deported back to his homeland and a new one brought in. So Kalsonenko and Childish are no geniuses, but for precisely that reason propagandists, coaches and marketers never worked them over. Zhidoshiz, though, was worked over, and not for his own good.

>>/dobrochan/4241@4226
>> if a couple of years ago Zhidoshiz was a fanatical Z-patriot

Where exactly did you notice that, motherfucker? You reread the thread, so maybe you can quote something?

My words apply to scum who cannot see beyond their own self-importance.

>> So the elderly Soviet Jew Kalsonenko with his "Odessa humour" turned out to be right, while the young genius of Civilization, to whom Elon Musk himself replies on Twitter, was mistaken?
Is Musk a Z-patriot from Chelyabinsk or a champion of Western Civilization? You can't even hit the notes of your own song.

After that comes something rechewed, from unwashed third hands.

Why couldn't I have gotten smarter Russian-speaking haters?

>>/dobrochan/424242@4226
>> My words apply to scum who cannot see beyond their own self-importance.
Autumn, and the exacerbation is getting worse. What's this, Father, striking a proud pose again?
>> Is Musk a Z-patriot from Chelyabinsk or a champion of Western Civilization?
As you know, patient, from your holy scripture (written on pants, and not in ink), Musk is Judas. For he conceived the neuro-god together with the non-Jew Altman, and then treacherously withheld the money.
Unfortunately, the rest of your text got smeared - apparently they served peas for lunch.
>> Why couldn't I have gotten smarter Russian-speaking haters?
Obviously, my dear sir, the neuro-god is testing you. Try praying to him harder instead of these stained-pants dances around Musk.

>>/dobrochan/424242@4226
Of course, man. It must have been some other "Zhidoshiz" arguing, in exactly the same style, with the Pants in the SVO threads.

>> Is Musk a Z-patriot from Chelyabinsk or a champion of Western Civilization?
He's a talking head shilling libertarianism while "his" companies sit firmly on the state's tit. What does he have to do with anything - do you identify yourself with him?

By the way, Anatoly Karlin, who wrote about "Kiev in 3 days", has likewise flip-flopped to "technosingularity in 3 years". So it's not an isolated case.

>>/dobrochan/4246@4226
Maybe. It's always fun to watch you, an animal, fail to back up your words with anything. That's the only reason I come in.

>>/dobrochan/4247@4226
So, patient, next you'll be going in for kissing the rabbi on the lips. What won't the patient resort to in order to hide the stains on his pants! Look, he's already trying on payos!

Meanwhile, it has turned out that the so-called benchmarks are actually not benchmarks at all, but nonsense with irreproducible results.
Well, who would have thought that this ancient shit would float up out of the patient's long since washed-and-rewashed pants!
https://arxiv.org/abs/2411.12990
https://www.technologyreview.com/2024/11/26/1107346/the-way-we-measure-progress-in-ai-is-terrible/

attachment: Gojou-Wakana-Sono-Bisque-Doll-wa-Koi-wo-Suru-8690576.jpeg (133.89 KB, 736x1150)
What I hate all the AI faggots for is killing normal search, Google's especially, and Bing is degrading too. Sometimes fucking Yandex looks better. Ko-ko-ko, who needs search indexes, let's put all our questions to the neural network, which will determine your gender neutrality and refuse to answer if the answer contradicts our "policies".

Fuck, all those schizos who kept personal archives on their own hard drives and catalogued them manually turn out to have been right all along.

One day you won't be able to find anything on Google, even if Google hasn't banned you. Fucking hell.

>>/dobrochan/4249@4226
>> Ko-ko-ko, an article, the benchmarks have problems
You grasp at straws like you always do. Year after year, the same thing.

Now a question, Kalsonenko. Can you solve a benchmark problem like this?

Let $p$ be the least prime number for which there exists a positive integer $n$ such that $n^{4}+1$ is divisible by $p^{2}$. Find the least positive integer $m$ such that $m^{4}+1$ is divisible by $p^{2}$.

Some machines can already.
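As it happens, the quoted problem is small enough to fall to exactly the brute force mocked further down the thread. A minimal Python sketch (my own illustration, assuming nothing beyond the problem statement):

```python
# Brute force for the quoted problem: find the least prime p such that
# p^2 divides n^4 + 1 for some positive integer n, then the least such m.

def is_prime(k: int) -> bool:
    if k < 2:
        return False
    i = 2
    while i * i <= k:
        if k % i == 0:
            return False
        i += 1
    return True

def solve() -> tuple[int, int]:
    p = 2
    while True:
        # For odd p, n^4 ≡ -1 (mod p) means n has order 8 mod p, so 8 | p - 1;
        # p = 2 fails since n^4 + 1 ≡ 2 (mod 4) for odd n. The filter only
        # skips primes that cannot work.
        if is_prime(p) and p % 8 == 1:
            mod = p * p
            hits = [n for n in range(1, mod) if pow(n, 4, mod) == mod - 1]
            if hits:
                return p, hits[0]
        p += 1

print(solve())  # (17, 110): p = 17, least m = 110
```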

>>/dobrochan/4251@4226
>> You grasp at straws.
Of course, Father - because the utter worthlessness (except for marketing) of the very benchmarks you prayed to for so long, with sighs under the blanket on lonely hospital nights, is a straw. Denying it, you soiled a good wagonload of pants a few years back. Well, patient, as they say: I believe, for it is absurd. A straw it is, then.
>> Some machines can already.
Awesome, patient! These machines are amazing! A little more and they will even start proving theorems automatically! Perhaps, if you pray to the neuro-god long enough and invest a lot of investors' money, the neuro-god will send down some such system? We could call it by the cool word Vampire, and we would feed you beets, for the appropriate coloring of your pants, and send you off to advertise it...
Oh, holy Freud, someone has already claimed that name: https://en.wikipedia.org/wiki/Vampire_(theorem_prover)

>>/dobrochan/4252@4226
>> Those very benchmarks.
Even your own article doesn't make generalizations like that. Who the fuck even are you? The contamination problem is solved by livebench/livecodebench.

>> A little more and they will even start proving theorems automatically!
They will. And they'd better. Sorry, I trust the impressions of Tao and Gowers more.

>> Awesome, patient! These machines are amazing!
Can you do that or not, motherfucker? You got so excited about the Golden Gate Bridge in Egypt, proving your cognitive superiority over GPT. You can probably crack AIME problems like nuts, since nothing has changed in your world: AI sucked and will keep sucking, and nothing complicated is within its reach.

>> Perhaps, if you pray to the neuro-god long enough and invest a lot of investors' money.
It can be done without investors. DeepSeek, for one, manages on its own capital - google r1-lite-preview. The olympiad winners working at a successful hedge fund must simply understand the market worse than you, who for years couldn't afford a 3060.

By the way, Kalson, are you a social invalid, or do you have notifications set up for this thread? The board is dead, I drop in maybe once a week, and you're always right here with an answer.

>>/dobrochan/4253@4226
>> The contamination problem is solved by livebench/livecodebench.
Of course, Father, it's solved. 100% solved - the marketing department wouldn't lie to you (according to a survey of housewives in Arizona households).
>> They will. And they'd better.
Yes, patient, the 2025 model year will bring a new line of chatjpt pro, available in exclusive colors! Innovation! Modern technology! Now with USB!
>> Sorry, I trust the impressions of Tao and Gowers more.
Do they, too, promise progress so magnificent that they are about to replicate Vampire, in development since the 90s?
>> AI sucked and will keep sucking, and nothing complicated is within its reach.
Don't worry, patient. You yourself say it can now supposedly work out a dumb arithmetic problem that a schoolboy could brute-force in C! Though, on the other hand, chatbots still have problems with arithmetic.
>> DeepSeek, for one, manages on its own capital
Father, don't bother trying: you and your soiled pants won't be taken into the Chinese Communist Party anyway.
>> The board is dead, I drop in maybe once a week.
Yet the moment you step onto the board, the smell carries through the whole hospital.



attachment: GfSYETMasAEKqC5.png (149.93 KB, 1716x1084)
At the end of 2024, language models - which not so long ago Kalsonenko was mocking for hallucinations about the Golden Gate Bridge - are solving problems at this level.

Kalsonenko, show me your solution.

>>/dobrochan/4264@4226
Neural networks were able to solve the problem by brute force? That's amazing, Father! A little more and you'll even be able to teach a neural network to play chess! In this brave new world of neuro-gods it might even take on some grandmaster. Now that will be news, if a neural network beats Kasparov - that has never happened before!

>>/dobrochan/4265@4226
>> Neural networks were able to solve the problem by brute force? That's amazing, Father!

Kalsonenko, please estimate the size of the combinatorial space for solving FrontierMath problems by brute force, and the probability of getting a hit in six attempts.

Also, please report your CodeForces rating and whether you are in the top 200 participants on the planet. Because o3 is in there. Not too much to ask.
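The rhetorical point can be made concrete with a back-of-envelope estimate (my own illustration, not from the thread): if answers were guessed blindly from a space of N possibilities, the chance of a hit in k attempts is 1 - (1 - 1/N)^k.

```python
# Back-of-envelope: probability of guessing an answer by blind luck in
# k attempts, with answers uniform over a space of size N.

def hit_probability(space_size: int, attempts: int) -> float:
    return 1.0 - (1.0 - 1.0 / space_size) ** attempts

# AIME answers are integers in [0, 999]; even there, six blind guesses
# give well under a 1% chance.
print(hit_probability(1000, 6))      # ~0.0060
# FrontierMath answers are large exact integers, so the space is vastly
# bigger and the probability is effectively zero.
print(hit_probability(10 ** 12, 6))  # ~6.0e-12
```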

>>/dobrochan/4266@4226
>> FrontierMath
>> A math benchmark testing the limits of AI
Do you think, patient, that since the neuro-god dealers have been caught cheating on AI benchmarks, you can hide the stain on your pants with a new benchmark - with math and hookers?
>> Also, please report your CodeForces rating and whether you are in the top 200 participants on the planet. Because o3 is in there.
And do you know, patient, which line you occupy in the DirtyPantsForces rating? 9 out of 10 doctors at our hospital believe that you, Father, top even that one. Congratulations!

>> In the United States, Albert Saniger, the former CEO of the startup Nate, has been accused of investment fraud and of providing false information about artificial intelligence technology. The startup, which supposedly developed an AI solution for online shopping, in fact used the work of hundreds of residents of the Philippines hired remotely.
Meanwhile, yet another AI has turned out to be a crowd of Indians.
Somewhere in the depths of the abandoned hospital, the rustle of a sweatshirt could be heard.