Singularity 2024-2025 - Dobrochan - Endchan Magrathea

/dobrochan/ - Dobrochan

Dobrochan's bunker, for waiting out all manner of upheavals.



thumbnail of GWkSrpUa8AAD89Y.jpeg (170.48 KB, 2706x948)


>>/dobrochan/4231@4226
However many gigawatt clusters they build, the bubble is about to burst. They urgently need to sell the suckers news about virtual clusters, raise money for "ethical AI", or fleece gullible sheikhs the way Altman does https://www.wsj.com/tech/ai/sam-altman-seeks-trillions-of-dollars-to-reshape-business-of-chips-and-ai-89ab3db0

thumbnail of Screenshot 2024-09-12 at 2.27.07 PM.png (217.78 KB, 1160x1386)
>>/dobrochan/4232@4226
It's about to burst! Investors are about to flee from the swindler Shmul Altman and the rest of the Zuckerbrins and Sutskevers! And then the era of Kalson-Aryan computing will dawn, without any God-abhorrent neural networks!

Meanwhile, in reality.

OpenAI o1 ranks in the 89th percentile on competitive programming questions (Codeforces), places among the top 500 students in the US in a qualifier for the USA Math Olympiad (AIME), and exceeds human PhD-level accuracy on a benchmark of physics, biology, and chemistry problems (GPQA).

I predicted all this back in 2021. Eat shit, subhuman.

thumbnail of Screenshot_20240913-210328~2.png (766.12 KB, 1080x2020)
>>/dobrochan/4233@4226
>> OpenAI o1 ranks in the 89th percentile on competitive programming questions (Codeforces)
Nine out of ten dentists choose Blendamed. "Blendamed" - and your oft-stained pants gleam white again!

>>/dobrochan/4234@4226
Terence Tao, Fields Medal winner:

I experimented a bit with OpenAI's new iteration of GPT, GPT-o1, which performs an initial reasoning step before running the language model. It's certainly a more powerful tool than previous versions, though it still struggles with the most challenging research-level math problems.

Here are some specific experiments (with a prototype version of the model I was given access to). In https://chatgpt.com/share/2ecd7b73-3607-46b3-b855-b29003333b87, I repeated an experiment from https://mathstodon.xyz/@tao/109948249160170335 in which I asked GPT to answer a vaguely worded mathematical question that could be resolved by identifying a suitable theorem (Cramér's theorem) from the literature. Previously, GPT might mention some relevant concepts, but the details were hallucinated nonsense. This time, Cramér's theorem was identified and a satisfactory answer was given. (1/3)

In https://chatgpt.com/share/94152e76-7511-4943-9d99-1118267f4b2b, I gave the new model a challenging complex analysis task (one for which I had previously asked GPT-4 to help write a proof in https://chatgpt.com/share/63c5774a-d58a-47c2-9149-362b05e268b4). Here the results were better than with previous models, but still slightly disappointing: the new model could work its way to a correct (and well-written) solution *if* provided with many hints and leading questions, but it did not generate the key conceptual ideas on its own and made some non-trivial mistakes. The experience was roughly comparable to advising a mediocre but not entirely incompetent graduate student. However, this was an improvement over previous models, whose capabilities were closer to an actually incompetent graduate student. It may take only one or two further iterations of improved capability (and integration with other tools, such as computer algebra packages and proof assistants) before reaching the level of a "competent graduate student", at which point this tool could become significantly useful for research-level tasks. (2/3)

As a third experiment, I asked (in https://chatgpt.com/share/bb0b1cfa-63f6-44bb-805e-8c224f8b9205) the new model to begin the task of formalizing a result in Lean (namely, establishing one form of the prime number theorem as a consequence of another) by breaking it down into sub-lemmas, formalizing a statement but not a proof for each. Here the results were promising in that the model understood the task well and performed a sensible initial breakdown of the problem, but it was limited by the lack of up-to-date information on Lean and its math library in its training, with its code containing several mistakes. However, I could imagine that a model of this capability, specifically fine-tuned on Lean and Mathlib and integrated into an IDE, could be extremely useful in formalization projects. (3/3)

Kalson, multiple laureate of the Lenin Prize for Achievement in Artistic Coprolalia:
>> Nine out of ten dentists choose Blendamed. "Blendamed" - and your oft-stained pants gleam white again!

>>/dobrochan/4235@4226
>> The chatbot successfully googled something and shat it out somewhere.
The important thing here, my dear fellow, is to be careful: when crying "glory to the robots!", you must try to restrain yourself and not soil your pants in a synchronized eruption.

I hadn't looked into the thread for half a year; yesterday I spent an hour and a half reading every post from that time.

I noticed that while a couple of years ago Zhidoshiz was a fanatical Z-patriot, he now gushes just as fanatically about the "Redberry in the Cock of the World" and the God-preserved City on the Hill. So the elderly Soviet Jew Kalsonenko with "the Odessa humour" turned out to be right, and the young genius from Civilization, to whom Elon Musk himself replies on Twitter, turned out to be wrong? How did that happen? Let's figure it out.

In the Eastern European fringes of the world, bruised by the Soviet experience, it is customary to admire techies and despise humanities types. After all, the former launched the first man into space, while the latter can only talk about Karl-Marla and historical materialism. To this day the IT boyar is a rich, privileged member of society.
And in the West? In the West they laugh at infantile tech bros, who consist either of guest workers or of local autistic losers. In the case of ML this is quite logical: endlessly fiddling with hyperparameters, with no explanation or interpretation of the results, is beneath a white man's dignity.

The guest workers have their own culture, their own ideology. It goes like this: you must work harder than the locals to bring the singularity closer (a.k.a. the second neuro-coming), never miss a single model release, sniff every researcher's fart on Discord. You must listen to the diversity manager and follow her recommendations in everything. For the sake of ascending to silicon paradise you can endure anything, so no complaining. No need to build a life of your own either - why bother, when the game will soon be over. Let the dumb normies make friends, start families, keep hobbies, mind their health. At most you may "improve" yourself with biohacking (https://t.me/mixail_kain/630) so as to be sure of reaching the cherished singularity.

Of course, with this approach your mind checks out quickly and for a long while. But that's no great loss, because a worn-out guest worker can simply be deported back to his homeland and a new one brought in. So no, Kalsonenko and company don't exactly pluck stars from the sky, but for precisely that reason they haven't been worked over by propagandists, coaches, and marketers. Zhidoshiz, though, has been worked over - and not for his own good.

>>/dobrochan/4241@4226
>> Noticed that while a couple of years ago Zhidoshiz was a fanatical Z-patriot

And where did you notice that, motherfucker? Reread the thread - maybe you'll manage to quote something.

My words stand: scum that cannot see past its own self-importance.

>> So the elderly Soviet Jew Kalsonenko with "the Odessa humour" turned out to be right, and the young genius from Civilization, to whom Elon Musk himself replies on Twitter, turned out to be wrong?
So is Musk a Z-patriot from Chelyabinsk, or a champion of Western Civilization? You can't even stay in tune with your own song.

After that comes some cud chewed from third, unwashed hands.

Why couldn't I have gotten smarter Russian-speaking haters?

>>/dobrochan/424242@4226
>> My words stand: scum that cannot see past its own self-importance.
Autumn - and the flare-ups are getting worse. Why is it, my dear fellow, that you've struck a proud pose again?
>> So is Musk a Z-patriot from Chelyabinsk, or a champion of Western Civilization?
As is known, sick man, from your holy scripture (written on pants, not in ink), Musk is Judas: for he conceived a neuro-god together with the goy Altman, and then treacherously withheld the money.
Unfortunately, the rest of your text was smeared - apparently peas were served for lunch.
>> Why couldn't I have gotten smarter Russian-speaking haters?
Obviously, esteemed one, the neuro-god is testing you. Try praying to him harder instead of these stained-pants dances around Musk.

>>/dobrochan/424242@4226
Sure, man. It must have been some other "Zhidoshiz" arguing in exactly the same style with the pants-man in the SVO threads.

>> So is Musk a Z-patriot from Chelyabinsk, or a champion of Western Civilization?
He's a talking head peddling libertarianism while "his" companies sit firmly on the state's teat. What does he have to do with it - do you identify yourself with him?

By the way, Anatoly Karlin, who wrote about "Kiev in 3 days", has likewise pivoted to "technosingularity in 3 years". So this is no isolated case.


>>/dobrochan/4247@4226
At this rate, patient, you'll be kissing the rabbi on the lips next. What won't the patient resort to, to hide the stains on his pants! Look, he's already trying on payot!

Meanwhile, it turns out that the so-called benchmarks are in fact not benchmarks at all, but nonsense with irreproducible results.
Well, who would have thought that this ancient shit from the patient's long-since-laundered pants would float up again!
https://arxiv.org/abs/2411.12990
https://www.technologyreview.com/2024/11/26/1107346/the-way-we-measure-progress-in-ai-is-terrible/

thumbnail of Gojou-Wakana-Sono-Bisque-Doll-wa-Koi-wo-Suru-8690576.jpeg (133.89 KB, 736x1150)
What I hate all these AI faggots for is that they killed normal search, especially on Google, and Bing is degrading too. Fuck, sometimes even the KGB-flavored Yandex searches better. Cluck-cluck, who needs search indexes, let's instead ask every question of a neural net that will determine your gender neutrality and refuse to answer if the answer contradicts our "policies".

What a clusterfuck: all those schizos who kept personal archives on their own hard drives and SORTED @ CATALOGUED them by hand turn out to have been right all along.

One day you just won't find anything on Google, even if Google itself hasn't banned you. Fucked.

>>/4249/
> cluck-cluck, some article says benchmarks have problems
Grasping at straws, as always. Year after year, the same thing.

Now a question, Kalsonenko. Can you solve, say, this problem from the benchmark?

> Let $p$ be the least prime number for which there exists a positive integer $n$ such that $n^{4}+1$ is divisible by $p^{2}$. Find the least positive integer $m$ such that $m^{4}+1$ is divisible by $p^{2}$.

Some little machines already can.
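(For the curious: the quoted problem is small enough to sanity-check by brute force. A minimal Python sketch - the function names are mine, and this is of course not how the "little machines" solve it:)

```python
def is_prime(k: int) -> bool:
    """Trial-division primality test; fine for the tiny numbers involved."""
    if k < 2:
        return False
    i = 2
    while i * i <= k:
        if k % i == 0:
            return False
        i += 1
    return True


def least_prime_and_m():
    """Scan primes in increasing order; for each prime p, scan residues mod p^2.

    n^4 + 1 mod p^2 depends only on n mod p^2, so checking
    n = 1 .. p^2 - 1 is exhaustive, and the first hit is the least m.
    """
    p = 2
    while True:
        if is_prime(p):
            sq = p * p
            for n in range(1, sq):
                if (n ** 4 + 1) % sq == 0:
                    return p, n
        p += 1


print(least_prime_and_m())  # prints (17, 110)
```

The outer loop terminates quickly: no prime below 17 can divide n⁴ + 1 at all (that requires p ≡ 1 mod 8), so the scan only ever reaches p = 17.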

>>/4251/
> grasping at straws
Of course, my dear fellow - the utter incompetence (marketing aside) of those very benchmarks you prayed to for so long, with sighs, under the blanket on lonely hospital nights, is just a straw. The one you ruined a good wagonload of pants denying, a few years back. Well then, patient, as they say: credo quia absurdum - a straw it is.
> Some little machines already can.
Marvelous, patient! These machines simply astound! A little longer and they'll even start proving theorems automatically! Perhaps, if one prays to the neuro-god long enough and pours in lots and lots of investor money, the neuro-lord will bestow some such system? We could give it a cool name like Vampire, feed you beets so your pants take on the appropriate coloring, and send you off to advertise it...
Ah, holy Freud, someone has already claimed the name: https://en.wikipedia.org/wiki/Vampire_(theorem_prover)

>>/4252/
> those very benchmarks
Even your own article doesn't make generalizations like that. Who are you bullshitting, and what for? The contamination problem is solved by livebench/livecodebench, for example.

> A little longer and they'll even start proving theorems automatically!
They already prove them. They'll prove them better. Sorry, I trust the impressions of Tao and Gowers more.

> Marvelous, patient! These machines simply astound!
So can you solve it or not, dumbass? You argued so rapturously about the Golden Gate Bridge in Egypt, proving your cognitive superiority over GPT. Surely you can crack AIME problems like nuts too, since in your little world nothing has changed - AI sucked and will go on sucking, with nothing hard within its reach.

> Perhaps, if one prays to the neuro-god long enough and pours in lots and lots of investor money
You can do it without investors. DeepSeek manages on its own capital - go google r1-lite-preview. The olympiad winners working at a successful hedge fund must simply understand the little market worse than you, who for years haven't been able to afford a 3060.

By the way, Kalson: are you a social invalid, or do you have notifications set up for this thread somehow? The board is dead - I drop by maybe once a week. Yet you're always right here with a reply.

>>/4253/
> The contamination problem is solved by livebench/livecodebench, for example.
Of course, my dear fellow, solved. A full 100% - the marketing department wouldn't lie (per a survey of Arizona households, among the housewives).
> They already prove them. They'll prove them better.
Yes, patient, the 2025 model year will see the launch of the ChatGPT Pro line, available in exclusive colors! Innovation! Modern technology! Now with USB!
> Sorry, I trust the impressions of Tao and Gowers more.
Do they likewise promise progress so magnificent that it's about to reproduce Vampire, in development since the 1990s?
> AI sucked and will go on sucking, with nothing hard within its reach.
Come now, patient, don't be upset. You say yourself that it can now supposedly grind out a dumb arithmetic problem any student could brute-force in C! Though on the other hand, chatbots still have trouble with arithmetic.
> DeepSeek manages on its own capital
My dear fellow, don't bother - you and your soiled pants won't be admitted into the Communist Party of China anyway.
> The board is dead - I drop by maybe once a week
And yet whenever you do visit the board, the aroma hangs over the whole hospital.



thumbnail of GfSYETMasAEKqC5.png (149.93 KB, 1716x1084)
By the end of 2024, the language models that Comrade Kalsonenko was mocking not so long ago for hallucinations about the Golden Gate Bridge are solving problems of this level.

Comrade Kalsonenko, show us your solution.

>>/4264/
The neural nets managed to solve a problem by brute force? Marvelous, my dear fellow! A little longer and we'll even be able to teach a neural net to play chess! In this brave new neuro-god world you could even play chess against some grandmaster. What news it would be if a neural net beat Kasparov - that's never happened before, after all!

>>/4265/
> The neural nets managed to solve a problem by brute force? Marvelous, my dear fellow!

Kalsonenko, kindly estimate the size of the combinatorial search space for solving FrontierMath problems by brute force, and the probability of hitting the answer within six attempts.

Also kindly report your CodeForces rating, and whether you are among the top 200 participants on the planet. Because o3 is. Without any "brute force".

>>/4266/
> FrontierMath
> A math benchmark testing the limits of AI
Do you suppose, patient, that now that the neuro-god's dealers have been caught gaming benchmarks for "AI", the stain on your pants can be hidden behind a new benchmark with math and hookers?
> Also kindly report your CodeForces rating, and whether you are among the top 200 participants on the planet. Because o3 is.
And do you know, sick man, which line you occupy in the DirtyPantsForces rating? Nine out of ten doctors at our hospital believe that you, my dear fellow, outrank even ChatGPT there. Congratulations!