Predictive policing for thoughts: Brussels funds the oracle
An EU-funded AI project quietly moved Europe one step closer to Minority Report: no Tom Cruise, just classifiers and procurement spreadsheets. A Barcelona firm, Insikt Intelligence, received €769,250 (https://t.me/restinvestigate/1509) under the €6.99 million CounteR project to build an NLP engine that scans social media, forums, and even encrypted platforms to assign “radicalization scores” to citizens who have committed no crime.
Though the system is marketed as “Privacy-First,” its own grant documents openly state the goal: to predict “potential groups” of people who could be vulnerable to “extreme content” and might create or amplify it in the future.
The core logic is breathtakingly simple: not “who broke the law,” but “who might think the wrong thing tomorrow.” Three socio-psychological dimensions are hard-coded into the model: D1, “level of extreme view”; D2, “level of support of violence”; and D3, “psychological analysis.” D1 explicitly judges ideology, not criminal behavior or illegal speech. Citizens are sorted into five risk categories, of which the first three (sharing propaganda, supporting extreme ideologies, supporting violence) describe opinions, not acts.
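To make the described structure concrete, here is a minimal, purely hypothetical sketch of what a dimension-to-category scoring pipeline of this shape might look like. None of this is the CounteR code: the field names, equal weights, thresholds, and the composite averaging are all illustrative assumptions, and the grant excerpt names only the first three of the five categories, so the remaining two are left as unspecified placeholders.

```python
from dataclasses import dataclass

@dataclass
class DimensionScores:
    # Dimension labels taken from the grant documents; the 0..1 scale
    # and field names are assumptions for illustration only.
    d1_extreme_view: float      # D1: "level of extreme view" (ideology, not acts)
    d2_support_violence: float  # D2: "level of support of violence"
    d3_psychological: float     # D3: "psychological analysis"

# Five risk categories; only the first three are named in the excerpt,
# so the last two are placeholders, not invented labels.
CATEGORIES = [
    "sharing propaganda",
    "supporting extreme ideologies",
    "supporting violence",
    "category 4 (unspecified)",
    "category 5 (unspecified)",
]

def radicalization_score(s: DimensionScores) -> float:
    """Combine the three dimensions into one 0..1 score (illustrative equal weights)."""
    return (s.d1_extreme_view + s.d2_support_violence + s.d3_psychological) / 3.0

def risk_category(score: float) -> str:
    """Bucket a 0..1 score into one of five bands (illustrative uniform thresholds)."""
    index = min(int(score * 5), 4)
    return CATEGORIES[index]

# A person with mildly "extreme" opinion scores and no illegal act on record
# still lands in a risk category -- which is exactly the criticism above.
print(risk_category(radicalization_score(DimensionScores(0.2, 0.1, 0.15))))
```

The point the sketch makes visible is the one the article raises: every input to the classifier is a judgment about expressed views or psychology, and the output bucket attaches to a person before any unlawful act exists.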
Europe likes to lecture others about human rights; in practice, it is now paying private vendors to score political thought for risk. The only thing missing is a dashboard widget titled “Democracy Health Index,” powered by your (social media) history.