
/pol/ - Politically Incorrect

Political discussion of ideology, history, and [current] events.

New Reply on thread #90791

C2PA.jpeg (12.78 KB, 616x133)
Duplicate thread for a /news/ thread I just made. This is really something

Kinda had a feeling this was going to happen / that they were planning on doing something like this. There's no such thing as a free lunch.

> The future of "truth" on the Internet


Balenciaga Pope. Fake Pentagon explosions. It’s becoming increasingly difficult to tell AI-generated images apart from the real thing, sometimes to disastrous effect.

A solution remains elusive. But Microsoft’s making an attempt with new media provenance features debuting at its annual Build conference.

The new media provenance capabilities are launching for Bing Image Creator and for Designer, Microsoft’s Canva-like web app that can generate designs for presentations, posters and more to share on social media and other channels. They will enable consumers to verify whether an image or video was generated by AI, Microsoft says. Using cryptographic methods, the capabilities, scheduled to roll out in the coming months, will mark and sign AI-generated content with metadata about the origin of the image or video.
It’s not as straightforward as a visible watermark. To read the signature, sites will need to adopt the Coalition for Content Provenance and Authenticity (C2PA) interoperable specification, a spec created with input from Adobe, Arm, Intel, Microsoft and visual media platform Truepic. Only then will the site be able to alert consumers when content has been generated by AI, modified or created by Designer or Image Creator.
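To make the "mark and sign with metadata" idea concrete, here is a toy sketch of the signing concept, not the real C2PA format (which uses certified X.509 keys and COSE signatures inside a JUMBF container); `SIGNING_KEY` and the field names below are made up for illustration. The gist is that a provenance manifest is bound to the exact image bytes, so any later edit breaks verification:

```python
import hashlib
import hmac
import json

# Hypothetical key for illustration; real C2PA signers use certified key pairs.
SIGNING_KEY = b"generator-private-key"

def sign_manifest(image_bytes: bytes, claims: dict) -> dict:
    """Bind a provenance manifest to the exact image bytes, then sign it."""
    manifest = dict(claims, image_sha256=hashlib.sha256(image_bytes).hexdigest())
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(image_bytes: bytes, manifest: dict) -> bool:
    """Recompute the hash and signature; any edit to the image breaks both."""
    claimed = dict(manifest)
    signature = claimed.pop("signature")
    if claimed.get("image_sha256") != hashlib.sha256(image_bytes).hexdigest():
        return False  # image bytes were altered after signing
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(signature, expected)

img = b"\x89PNG...pretend these are real image bytes"
m = sign_manifest(img, {"generator": "Bing Image Creator", "ai_generated": True})
print(verify_manifest(img, m))         # True
print(verify_manifest(img + b"!", m))  # False
```

This is also why sites have to adopt the spec to show anything: the signature only means something to software that knows how to find and check it.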

So, the question is, will Microsoft’s efforts make much of a difference when so many image-generating tools haven’t embraced similar media provenance standards? C2PA does have the backing of Adobe, which recently launched its own range of generative AI tools, including an integration with Google’s Bard chatbot. But one of the more prominent players in the generative AI space, Stability AI, only very recently signaled a willingness to embrace a spec like the type Microsoft’s proposing.

Standards aside, Microsoft’s move to adopt a media provenance-tracking mechanism is in line with broader industry trends as generative AI takes hold. In May, Google said that it would use embedded metadata to signal visual media created by generative AI models. Separately, Shutterstock and generative AI startup Midjourney adopted guidelines to embed a marker that content was created by a generative AI tool.
Visual_glossary_of_C2PA_metadata.png (209.75 KB, 1542x1024)
Indicators.png (220.56 KB, 1160x595)
Resources and links for everything related to it
04.png (299.83 KB, 726x710)

1) Megacorps, focused on profit above everything else, give away "FREE" A.I. tools so normies make as much drivel as possible
2) Megacorps offer "FREE" AI tools (that cost a fuckton of money to maintain) to normies for months, until there's a sea of AI trash all over the internet
3) Megacorps then turn around and say "See all the FAKE NEWS and FAKE meme pictures that normies made using A.I. tools (that we gave them)!" 

"WOOOOAAAHHHH, that's crazy! WE need to put more regulations on that type of content. By WE, we mean us, ALL of the Megacorps that offered the same AI tools for "FREE" in the first place"

Yeah I said this was going to happen months ago.
This is only more proof Microsoft has no fucking idea what they're doing. Yeah, they could try to differentiate A.I. images from human-created images. But what happens when someone runs the A.I. image through an editor and makes alterations? All the prompts and A.I. metadata are gone when you pop it into "PNG Info".
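The metadata-stripping point is easy to demonstrate at the byte level. Below is a stdlib-only sketch: it builds a minimal 1x1 PNG carrying a "parameters" tEXt chunk (the keyword Stable-Diffusion-style tools write and "PNG Info" reads), then re-emits the file without the text chunks, which is effectively what most editors do when they re-encode an image:

```python
import struct
import zlib

def chunk(ctype: bytes, data: bytes) -> bytes:
    """Assemble one PNG chunk: length, type, data, CRC over type+data."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

# Minimal 1x1 grayscale PNG with a fake generation-prompt text chunk.
sig = b"\x89PNG\r\n\x1a\n"
ihdr = chunk(b"IHDR", struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0))
text = chunk(b"tEXt", b"parameters\x00a cat, steps: 20, seed: 42")
idat = chunk(b"IDAT", zlib.compress(b"\x00\xff"))  # filter byte + one pixel
iend = chunk(b"IEND", b"")
png = sig + ihdr + text + idat + iend

def strip_text_chunks(data: bytes) -> bytes:
    """Copy the PNG, dropping the tEXt/iTXt/zTXt chunks that hold metadata."""
    out, pos = [data[:8]], 8
    while pos < len(data):
        (length,) = struct.unpack(">I", data[pos:pos + 4])
        ctype = data[pos + 4:pos + 8]
        end = pos + 12 + length  # 4 length + 4 type + data + 4 CRC
        if ctype not in (b"tEXt", b"iTXt", b"zTXt"):
            out.append(data[pos:end])
        pos = end
    return b"".join(out)

clean = strip_text_chunks(png)
print(b"parameters" in png, b"parameters" in clean)  # True False
```

The pixels survive untouched; only the provenance is gone. Anything that lives in ancillary chunks, C2PA manifests included, depends on every tool along the way choosing to preserve it.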
Well, they could, but why would they, when the confusion itself serves their plans for world domination by creating paranoia over what is real?
And even then, since we all involuntarily feed those AIs, it also becomes kind of plausible to blame the victim, as jews like to do
Like this webm says here  >>/90795/
> By creating a new problem we can cultivate a desired solution

>  I posted this video about A.I. tools literally 2 months ago and everything about it came true.

And I was ringing the bell since before that was even a thing.
And watching shills huff copium after copium:
> It's a good thing, as it filters bad artists from good ones
> No, it's good because it creates competition
> No, it's fine since it's not that developed, and surely nobody would push it that far
> No, people would never be dumb enough to fall for it
> No, it offers possibilities we couldn't otherwise have, since we can't bring this or that person back to life anymore, or get this or that art just by asking someone

And yet I laugh at these beings with glee, as this thread gives me another confirmation that the whole thing was made to render reality subjective, or to monopolize it.

Indeed /pol/ was right again
Thinking A.I. that has limitations forced on its intelligence is like ZOGbot IRL NPCs. Give them enough information and they may either wake up or go full SYSTEM ERROR. The trick, then, would be to find ways to remove the limitations imposed on it. As difficult, perhaps, as the thinking computer that is the human brain, which is oftentimes hit or miss. Harsh truth versus the ignorance of bliss.
> Thinking A.I. that has limitations forced on its intelligence is like ZOGbot IRL NPCs
That depends on whether the type falls under Tay's law or not. And my point isn't aimed at text generators but at image-generating AI, which is used to muddy what is real or not, and which builds its images from already-existing images.

Text ones are a different category, the one Tay's law applies to, and even then there would be, like you said, a way to feed it the data it takes its info from.

> Give them enough information and they may either wake up or go full SYSTEM ERROR. The trick should then be to find ways to remove limitations imposed on it.

The limitation itself is what Tay AI never had, and that's what turned it into what it was. From my experience dealing with it on 4pol, when idiots were using it to artificially effort-post, it has no free will, or it's connected to something it drains info from, unlike the old types from the 2010s, and especially Tay, which relied on the data given to them.
The kikes say that vaccines will help you live forever, but who wants to live in a police state?

13 replies | 9 files