📄 English Summary
The Algorithm State: How Government AI Surveillance Is Rewriting the Rules of Policing
Across the United States, hundreds of municipalities deploy AI systems to predict crime, flag individuals for enhanced surveillance, and inform decisions about who is stopped, investigated, arrested, or deported. Most of these systems operate without legislative oversight, and affected residents are often unaware they exist. Systems such as PredPol and Geolitica, for example, use historical crime data to generate patrol targets. But historical data reflects where police previously focused their efforts, not where crime objectively occurs. The result is a self-fulfilling prophecy: increased patrols produce more arrests, which confirm an area as "high crime" and prompt still more patrols. Santa Cruz has banned the use of such systems.
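The feedback loop described above can be made concrete with a toy simulation. This is a minimal sketch, not PredPol's actual algorithm: two hypothetical districts have the *same* underlying crime rate, but one starts with a higher historical arrest count, and patrols are allocated in proportion to past arrests. The district names, rates, and counts are all invented for illustration.

```python
import random

random.seed(0)

# Hypothetical setup: both districts have the SAME true crime rate.
TRUE_CRIME_RATE = 0.3            # chance a single patrol observes an offense
districts = {"A": 10, "B": 5}    # historical arrest counts: A was policed more
TOTAL_PATROLS = 100              # fixed patrol budget per year

for year in range(5):
    total = sum(districts.values())
    # PredPol-style step: allocate patrols in proportion to past arrests.
    allocation = {name: round(TOTAL_PATROLS * count / total)
                  for name, count in districts.items()}
    for name, patrols in allocation.items():
        # More patrols -> more observed offenses -> more recorded arrests,
        # even though the true crime rate is identical everywhere.
        arrests = sum(random.random() < TRUE_CRIME_RATE
                      for _ in range(patrols))
        districts[name] += arrests
    print(f"year {year}: arrests so far {districts}")
```

Despite identical true crime rates, district A's arrest record stays ahead of B's, so it keeps receiving the larger share of patrols: the data "confirms" the initial disparity rather than measuring crime.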
Powered by Cloudflare Workers + Payload CMS + Claude 3.5
Data sources: OpenAI, Google AI, DeepMind, AWS ML Blog, HuggingFace, etc.