BEGIN:VCALENDAR
PRODID;X-RICAL-TZSOURCE=TZINFO:-//Calagator//EN
CALSCALE:GREGORIAN
X-WR-CALNAME:Calagator
METHOD:PUBLISH
VERSION:2.0
BEGIN:VTIMEZONE
TZID;X-RICAL-TZSOURCE=TZINFO:America/Los_Angeles
BEGIN:STANDARD
DTSTART:20231105T020000
RDATE:20231105T020000
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
CREATED;VALUE=DATE-TIME:20240208T015833Z
DTEND;TZID=America/Los_Angeles;VALUE=DATE-TIME:20240220T190000
DTSTART;TZID=America/Los_Angeles;VALUE=DATE-TIME:20240220T180000
DTSTAMP;VALUE=DATE-TIME:20240208T015833Z
LAST-MODIFIED;VALUE=DATE-TIME:20240208T015833Z
UID:http://calagator.org/events/1250480957
DESCRIPTION:Last month\, we heard about the use of Artificial
  Intelligence (AI) technologies in our educational system\, but these
  technologies are also being incorporated into many other commercial
  and social enterprises that impact our daily lives\, including the
  fields of medicine\, journalism\, finance\, human resources\, law
  enforcement\, and transportation\, just to name a few.\n\nWhile AI
  technologies may be beneficial to society\, how do we know that the
  systems being developed are trustworthy and that they actually do
  what their creators claim? Can developers explain how their AI
  systems work and demonstrate that the outputs they generate are not
  biased? How might governments regulate these systems? Should
  companies be allowed to regulate themselves? How might governments
  and companies work together to ensure fairness and understandability
  of what the systems are doing?\n\nLast year\, World Privacy Forum\, a
  privacy-focused research nonprofit\, studied various AI governance
  tools currently in use around the world. They recently published
  their findings via a report that was co-authored by Pam Dixon\,
  executive director of World Privacy Forum\, and Kate Kaye\, deputy
  director of the organization:\nhttps://www.worldprivacyforum.org/
 2023/12/new-report-risky-analysis-assessing-and-improving-ai-
 governance-tools/\n\nAt this month's meeting\, World Privacy Forum’s
  Kate Kaye will join us to share the details of their research
  methodologies and what they learned about how governments are
  overseeing the implementation of AI in their countries. She'll give
  an overview of what AI is and what it does\, and she'll also present
  some examples of both effective and ineffective approaches to good
  governance of these systems.\n\nBring your questions and thoughts
  about AI governance\, and come join the discussion!\n\nPlease RSVP
  via this Meetup page or by sending an email to
  ta3mevents@pdxprivacy.org.\n\n\nSpeaker bio:\n\nKate Kaye is a
  Portland resident and deputy director of World Privacy Forum\, a
  nonpartisan public-interest research nonprofit. Her research focuses
  on the implications of AI\, digital identity and health data
  ecosystems\, data governance\, and other issues related to data
  collection\, use\, and privacy.\n\nBefore joining World Privacy
  Forum\, Kate worked for more than 20 years as an award-winning
  journalist covering data\, emerging technology\, and the impact of
  tech on people and society. Her reporting has been seen and heard in
  MIT Technology Review\, NPR\, Protocol\, Bloomberg CityLab\,
  OneZero\, WSJ\, Fast Company\, and other media outlets.\n\nKate is
  the founder of tech and AI ethics reporting website
  RedTailMedia.org. RedTail has been home to some of her work
  investigating algorithmic and surveillance tech policy and use in
  Portland\, including Banned in PDX\, a podcast series about
  Portland’s facial recognition ban\, and an investigation of the
  city’s collapsed partnership with Google-sibling Replica\, a location
  and mobility tracking company. Kate is the author of the 2009 book
  on digital voter data use\, Campaign ’08: A turning point for
  digital media.\n\n\n\nBy attending this TA3M meeting\, you agree to
  follow our Code of Conduct:\nhttps://www.meetup.com/
 Portlands-Techno-Activism-3rd-Mondays/pages/22681732/Code_of_Conduct/
 \n\n{short} Code of Conduct\nPortland's Techno-Activism 3rd Mondays
  is dedicated to providing an informative and positive experience for
  everyone who participates in or supports our community\, regardless
  of gender\, gender identity and expression\, sexual orientation\,
  ability\, physical appearance\, body size\, race\, ethnicity\, age\,
  religion\, socioeconomic status\, caste\, or creed.\n\nOur events
  are intended to educate and share information related to technology
  and activism\, and anyone who is there for this purpose is welcome.
  Because we value the safety and security of our members and strive
  to have an inclusive community\, we do not tolerate harassment of
  members or event participants in any form.\n\nAudio and video
  recording are not permitted at meetings without prior
  approval.\n\nOur Code of Conduct
  (https://www.meetup.com/Portlands-Techno-Activism-3rd-Mondays/pages/
 22681732/Code_of_Conduct/) applies to all events run by Portland's
  TA3M. Please report any incidents to the event organizer.\n\nTags:
  Artificial Intelligence\, AI\, algorithms\, privacy\, policy\,
  digital rights\, technology\n\nImported from:
  http://calagator.org/events/1250480957
URL:https://www.meetup.com/portlands-techno-activism-3rd-mondays/events/2
 99039934
SUMMARY:How governments are making AI more responsible\, fair and explain
 able
LOCATION:Online: placeholder for on-line events
SEQUENCE:2
END:VEVENT
END:VCALENDAR
