Software To Recognize Faces Is Found To Be Biased
The majority of commercial facial-recognition systems exhibit bias, according to a study from a federal agency released recently, underscoring questions about a technology increasingly used by police departments and federal agencies to identify suspected criminals.
The systems falsely identified African American and Asian faces 10 to 100 times more often than Caucasian faces, the National Institute of Standards and Technology reported. Among a database of photos used by law enforcement agencies in the United States, the highest error rates came in identifying Native Americans, the study found.
The technology also had more difficulty identifying women than men. And it falsely identified older adults up to 10 times more than middle-aged adults.
The new report comes at a time of mounting concern from lawmakers and civil rights groups over the proliferation of facial recognition. Proponents view it as an important tool for catching criminals and tracking terrorists. Tech companies market it as a convenience that can be used to help identify people in photos or in lieu of a password to unlock smartphones.
Civil liberties experts, however, warn that the technology — which can be used to track people at a distance without their knowledge — has the potential to lead to ubiquitous surveillance, chilling freedom of movement and speech. Last year, San Francisco, Oakland and Berkeley in California and the Massachusetts communities of Somerville and Brookline banned government use of the technology.
“One false match can lead to missed flights, lengthy interrogations, watch list placements, tense police encounters, false arrests or worse,” Jay Stanley, a policy analyst at the American Civil Liberties Union, said in a statement. “Government agencies including the FBI, Customs and Border Protection and local law enforcement must immediately halt the deployment of this dystopian technology.”
The federal report is one of the largest studies of its kind. The researchers had access to more than 18 million photos of about 8.5 million people from American mug shots, visa applications and border-crossing databases.
The National Institute of Standards and Technology tested 189 facial-recognition algorithms from 99 developers, representing the majority of commercial developers. They included systems from Microsoft, biometric technology companies like Cognitec, and Megvii, an artificial intelligence company in China.
The federal report confirms earlier studies from MIT that reported that facial-recognition systems from some large tech companies had much lower accuracy rates in identifying female and darker-skinned faces than white male faces.
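Disparities like the ones NIST and MIT measured come down to computing an error rate per demographic group and comparing the ratios. A minimal sketch with toy data (illustrative only, not NIST's or MIT's actual methodology):

```python
from collections import defaultdict

def false_match_rates(results):
    """Compute per-group false-match rates.

    `results` is a list of (group, is_false_match) tuples, one per
    comparison of two different people (a "non-mated" pair).
    """
    totals = defaultdict(int)
    false_matches = defaultdict(int)
    for group, is_false_match in results:
        totals[group] += 1
        if is_false_match:
            false_matches[group] += 1
    return {g: false_matches[g] / totals[g] for g in totals}

# Toy data: group A has 1 false match per 1000 pairs, group B has 20.
toy = [("A", i < 1) for i in range(1000)] + [("B", i < 20) for i in range(1000)]
rates = false_match_rates(toy)
print(rates["B"] / rates["A"])  # roughly a 20x disparity
```

The headline "10 to 100 times" figures are ratios of exactly this kind, computed between demographic groups for each algorithm.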
speech recognition applications — top answer, from AppWorks on Facebook
【What's Up, AI? A Week of Tech Announcements】
This week in AI, Microsoft, Google, and Facebook all had announcements or press surrounding their AI activities. For all you founders interested in the world of AI, let's take a closer look at what they've been up to recently.
Google: Earlier this week, Google released four new machine learning APIs for developers: (1) object detection, (2) tracking, (3) on-device translation, and (4) AutoML Vision Edge. These give developers more base APIs to build on, better UX, and greater customization. The release is in line with Google's vision of bringing better technology to the world by promoting open source.
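Object-detection APIs like these typically return candidate bounding boxes with confidence scores, and a standard post-processing step is non-maximum suppression (NMS) to collapse overlapping detections of the same object. A minimal pure-Python sketch of that step (illustrative, not Google's implementation):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, threshold=0.5):
    """Keep the highest-scoring box from each cluster of overlapping boxes."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= threshold for j in keep):
            keep.append(i)
    return keep

# Two heavily overlapping boxes plus one distinct box.
detections = [(10, 10, 50, 50), (12, 12, 52, 52), (100, 100, 140, 140)]
scores = [0.9, 0.8, 0.7]
print(nms(detections, scores))  # indices of the two surviving detections
```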
Facebook: On the topic of open source, Facebook continues its commitment to the wider developer community (and specifically the AI community) by announcing Pythia, a modular plug-and-play framework that allows data scientists to quickly build, reproduce, and benchmark AI models. It is also available on GitHub.
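The "modular plug-and-play" design in frameworks like Pythia usually rests on a registry pattern: components are registered under a name so that swapping models is a configuration change rather than a code change. A minimal sketch of the pattern (illustrative only, not Pythia's actual API):

```python
MODEL_REGISTRY = {}

def register_model(name):
    """Decorator that makes a model class discoverable by name."""
    def wrapper(cls):
        MODEL_REGISTRY[name] = cls
        return cls
    return wrapper

@register_model("baseline")
class BaselineModel:
    def predict(self, x):
        return 0  # trivial baseline: always predicts zero

@register_model("doubler")
class DoublerModel:
    def predict(self, x):
        return 2 * x

def build_model(name):
    """Instantiate whichever model a config names."""
    return MODEL_REGISTRY[name]()

model = build_model("doubler")  # swap to "baseline" via config, no code edits
print(model.predict(21))
```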
Microsoft: In an annual machine-translation competition (WMT19), Microsoft Research Asia came out on top in 8 of the 19 language-translation challenges, beating out other tech firms like Yandex and IBM. This is significant because Microsoft previously developed a speech-recognition system that transcribes as well as a human, so winning the competition further solidifies its credibility in the space. Congrats, Microsoft!
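"Transcribes as well as a human" is typically quantified with word error rate (WER): the word-level edit distance between the system's transcript and a reference transcript, divided by the reference length. A minimal sketch of the standard computation:

```python
def wer(reference, hypothesis):
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# One substituted word out of six: WER = 1/6.
print(wer("the cat sat on the mat", "the cat sat on a mat"))
```

Human parity claims in speech recognition compare a system's WER on a benchmark against the WER of professional human transcribers on the same audio.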
As we continue to see AI creation and adoption grow, we can only expect to hear about AI on the daily. In the meantime before AI becomes mainstream, check out this video from Code Bullet where he makes an AI that learns to walk while avoiding a death laser. Have a great weekend!
Applications are now open for AW#19. If you're an AI or blockchain startup, apply now here >> http://bit.ly/2JDsU8q
- Natalie Feng Lin, Analyst
speech recognition applications — featured post, from AOPEN Taiwan on Facebook
Artificial intelligence = the future trend?! AOPEN rides the IoT wave with you 🤣🤣
#WhatDoesKaiFuLeeSay
#WhatDoYouAllThink
#AICanDoThis👊👊
#DetectedNowhereToHide
#CrowdsAreCashFlow🤑🤑🤑🤑🤑
#AOPEN #建碁
【Kai-Fu Lee in Quartz: the idea that "humanoid robots" will soon enter every household is nonsense】The age of artificial intelligence is nearly upon us, but emotional robots and home robots are still far off. Below is the piece Quartz commissioned from me:
◀ English original ▶
Robots should make money, save money, increase productivity, or deliver entertainment—and let humans be human
The age of artificial intelligence (AI) and robotics is upon us, but the current fad of emotional humanoid robots is not headed in the right direction.
First, let’s understand what robotics is based on:
AI algorithms which are very good at optimization of explicitly defined goals (but cannot create, and have no feelings)
Mechanical control which advances much slower than AI software algorithms
Sensors which are rapidly improving but are often still too expensive, too large, or too power-hungry
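The first of these points, optimization of an explicitly defined goal, is precisely what gradient-based methods do well. A toy sketch (illustrative only): the goal is stated as a loss function, and the algorithm mechanically follows the gradient to a minimum, with no creativity or judgment involved.

```python
def minimize(grad, x0, lr=0.1, steps=200):
    """Plain gradient descent on a single parameter."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# Explicitly defined goal: minimize (x - 3)^2, whose gradient is 2 * (x - 3).
x_opt = minimize(lambda x: 2 * (x - 3), x0=0.0)
print(round(x_opt, 4))  # converges near the optimum at 3.0
```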
Given the above, it is ludicrous to think that human-like robots will roam our homes any time soon. When a robot looks like a person, talks like a person, and has features like a person, home users will have unattainable human-capability expectations. The disappointment alone will doom any company hoping to bring science fiction to the living room in the next decade, not to mention the price-sensitivity for consumer markets.
Robotics must begin with utilitarianism in mind—robots should make money, save money, increase productivity, or deliver entertainment. There will be industrial robots that build other robots in high-volume, manufactured with today’s technologies. There will be commercial robots that deliver economic value (such as replacing security, receptionists, and drivers). There will be consumer robots that mimic today’s appliances and toys, requiring no consumer education, and causing no human-capability expectation.
These robots won't look like a person. The industrial robot is a giant factory run in the dark by machines (like at Foxconn’s most advanced factories), or a warehouse with smart forklifts (like our investment Dorabot). The commercial robot comes in various forms and applications. It might look like an array of cameras (like our investment Megvii) or an automated store (like our investment F5 Future Store). The autonomous vehicle will look like a car, except will be first deployed in low-speed, freight, or fixed-function transport—such as in airport autonomous car-only lanes, or in transport from parking garages to shopping malls/theme parks (like our investment UISee). And the consumer robot may look like a speaker (like the Amazon Echo), a TV, a vacuum cleaner (like Roomba), an educational toy (like our investment Wonder Workshop Dash Bot), or a pad-on-steroids for family communications (like our investment Ainemo).
Will AI capabilities increase over time? Of course. Speech recognition will get better, computer vision will improve, SLAM will be improved to help the robot move around fluidly, and the robot will be able to translate languages, or have a dialog within limited domains. The robot may be able to read some of our emotions, or mimic certain human emotions. But this mimicking will go from laughable and entertaining to occasionally acceptable—and generally not genuine. For decades to come, robots by themselves will be unable to learn common sense reasoning, creativity, or planning. They also won't possess the self-awareness, feelings, and desires that humans do. This type of “general AI” does not exist, and there are no known engineering algorithms for it. I don’t expect to see those algorithms for decades, if ever.
Trying to make robots human-like is a natural temptation for robotics and AI scientists, and predicting humanoid robots comes naturally to science fiction writers. But we humans simply think differently from AI. We create and AI optimizes. We love and AI is stoic. We have common sense and AI learns patterns from big data in a singular domain. Simply stated, we are good at what AI is not, and AI is good at what we are not.
In the future, the human edge will be in creativity and social interaction. Therefore, we need to focus robotics development toward what they’re good at: repetitive tasks, optimization, and utilitarian value creation. We should also let people do what they’re good at: innovation, creation, human-to-human interaction, and performing services.
I am an advocate of making utilitarian robots, and encouraging people to go into service jobs. I am not an advocate of making humanoid service robots—it is too hard today, and will not meet people’s expectations; therefore they will likely fail. Whether or not my analysis is correct, we need to be reminded that in the next decade AI will replace a massive number of manual-labor, repetitive, and analytical jobs. We have a human responsibility to help create societal service jobs—not dream or plan a society in which all jobs come with a sign “humans need not apply.”