<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Sally Tayie, author at MIL Magazine</title>
	<atom:link href="https://milmagazine.org/author/sally/feed/" rel="self" type="application/rss+xml" />
	<link>https://milmagazine.org/author/sally/</link>
	<description>Advancing Media, Information and Critical Thinking</description>
	<lastBuildDate>Fri, 16 Apr 2021 08:29:40 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.8.5</generator>

<image>
	<url>https://milmagazine.org/wp-content/uploads/2024/07/favicon-MIL-Magazine-130x130.png</url>
	<title>Sally Tayie, author at MIL Magazine</title>
	<link>https://milmagazine.org/author/sally/</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>The Global Youth and News Media Prize 2021: Honoring news media with COVID-19 coverage targeted to children, and educators emphasizing quality journalism and press freedom</title>
		<link>https://milmagazine.org/convocatorias/the-global-youth-and-news-media-prize-2021-honoring-news-media-with-covid-19-coverage-targeted-to-children-and-educators-emphasizing-quality-journalism-and-press-freedom/</link>
					<comments>https://milmagazine.org/convocatorias/the-global-youth-and-news-media-prize-2021-honoring-news-media-with-covid-19-coverage-targeted-to-children-and-educators-emphasizing-quality-journalism-and-press-freedom/#respond</comments>
		
		<dc:creator><![CDATA[Sally Tayie]]></dc:creator>
		<pubDate>Wed, 07 Apr 2021 12:33:58 +0000</pubDate>
				<category><![CDATA[Convocatorias]]></category>
		<category><![CDATA[Global Youth]]></category>
		<category><![CDATA[Starred]]></category>
		<guid isPermaLink="false">http://www.aikaeducacion.com/?p=13764</guid>

					<description><![CDATA[<p>The COVID-19 crisis has shed light on serious ills that developed concurrently with it, chief among them a toxic information environment that spread quickly, deeply and dangerously. For all the downsides, such crises help more people see and recognize the importance of quality journalism, as well as the need to involve and [&#8230;]</p>
<p>The post <a href="https://milmagazine.org/convocatorias/the-global-youth-and-news-media-prize-2021-honoring-news-media-with-covid-19-coverage-targeted-to-children-and-educators-emphasizing-quality-journalism-and-press-freedom/">The Global Youth and News Media Prize 2021: Honoring news media with COVID-19 coverage targeted to children, and educators emphasizing quality journalism and press freedom</a> appeared first on <a href="https://milmagazine.org">MIL Magazine</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>The COVID-19 crisis has shed light on serious ills that developed concurrently with it, chief among them the <strong>toxic information environment</strong>, which spread quickly, deeply and dangerously. For all the downsides, such crises help more people see and <strong>recognize the importance of quality journalism, as well as the need to involve and engage youth</strong> &#8211; the group most targeted by mis- and disinformation, and truly in need of understanding and appreciating information that helps them form opinions and make informed decisions.</p>



<p>When addressing quality journalism and youth, it becomes essential to shed light on the work of the <a href="https://www.globalyouthandnewsmediaprize.net/">Global Youth and News Media Prize</a>, founded by <a href="https://www.linkedin.com/in/aralynnmcmane/">Aralynn McMane</a> and <a href="https://www.linkedin.com/in/jolinweir/">Jo Weir</a> in 2018, with the aim of honoring and recognizing the organizations that innovatively engage young generations with professional journalism.</p>



<p>The third edition of the <a href="https://www.globalyouthandnewsmediaprize.net/premios-en-2021"><strong>Global Youth and News Media Prize 2021</strong></a> will award initiatives from &#8220;any kind of news media on any platform and run by people of any age&#8221; as well as from the community of teachers and educators. There are three main award categories: <strong>The Journalism Award</strong>, this year focusing on &#8220;coverage for children&#8221; of the COVID-19 pandemic; <strong>The News/Media Literacy Award</strong>, dedicated to teachers who prepare future generations by emphasizing the role of journalism and the importance of press freedom; and <strong>The Planet Award</strong>, dedicated to &#8220;reporting or initiatives that effectively provide young audiences with information and hope for saving the planet&#8221; – <em>more information about the last category will be available on May 1<sup>st</sup></em>.</p>



<p>Not only does the initiative honor and recognize efforts through its awards, it also develops projects that reinforce its objectives. In 2020, the first steps were taken in <strong>an international project</strong> that aims to engage and empower youth by involving them in news reporting activities. The first action &#8211; the <a href="https://www.globalyouthandnewsmediaprize.net/project-world-teenage-reporting-pro">Teenage Reporting Project – COVID 19</a> &#8211; was a worldwide invitation to selected news organizations &#8220;to assign their teenage journalists to cover the untold stories of their counterparts who were helping during the COVID-19 pandemic&#8221;, and the second &#8211; the <a href="https://www.globalyouthandnewsmediaprize.net/teenage-reporting-tolerance-challen">Teenage Reporting Project – Tolerance</a> &#8211; encouraged coverage featuring &#8220;champions of tolerance&#8221; in young people&#8217;s lives.</p>



<div class="wp-block-image"><figure class="aligncenter size-large is-resized"><img fetchpriority="high" decoding="async" src="https://milmagazine.org/wp-content/uploads/2021/04/united-nations-covid-19-response-5hp3iqwZXD8-unsplash-1024x768.jpg" alt="" class="wp-image-13766" width="586" height="439" srcset="https://milmagazine.org/wp-content/uploads/2021/04/united-nations-covid-19-response-5hp3iqwZXD8-unsplash-1024x768.jpg 1024w, https://milmagazine.org/wp-content/uploads/2021/04/united-nations-covid-19-response-5hp3iqwZXD8-unsplash-300x225.jpg 300w, https://milmagazine.org/wp-content/uploads/2021/04/united-nations-covid-19-response-5hp3iqwZXD8-unsplash-768x576.jpg 768w, https://milmagazine.org/wp-content/uploads/2021/04/united-nations-covid-19-response-5hp3iqwZXD8-unsplash-990x743.jpg 990w, https://milmagazine.org/wp-content/uploads/2021/04/united-nations-covid-19-response-5hp3iqwZXD8-unsplash.jpg 1320w" sizes="(max-width: 586px) 100vw, 586px" /></figure></div>



<p>When asked about the driving force behind creating the project, in an <a href="https://milmagazine.org/entrevistas/teenage-journalists-reveal-the-untold-stories-of-how-their-peers-worldwide-are-making-a-difference-during-the-pandemic-aralynn-mcmane-tells-us-about-it/">interview with AIKA</a>, McMane highlighted how annoyed she was with the negative portrayal of youth as careless and irresponsible, which is largely what inspired her to present them in a positive frame: to showcase the untold rest of the story of how teenagers were actually helping during the pandemic. Given her experience with similar projects, McMane believed in its success, since &#8220;both <strong>young people and their adult mentors respond well</strong>&#8221;. She also emphasized that &#8220;the success of the operation depended fully on those <strong>editors and advisers who had fully dedicated themselves to helping young people </strong>learn how to do professional reporting.&#8221;</p>



<p>Honoring and appreciating best practice, and presenting collaborative efforts that aim to engage youth, promote quality journalism and defend freedom of speech, are the driving force and the core of the <a href="https://www.globalyouthandnewsmediaprize.net/">Global Youth and News Media Prize</a>. Such an empowering initiative is a remedy for information chaos, targeting the educational sphere while addressing youth&#8217;s apathy and cynicism towards professional journalism.</p>
<p>The post <a href="https://milmagazine.org/convocatorias/the-global-youth-and-news-media-prize-2021-honoring-news-media-with-covid-19-coverage-targeted-to-children-and-educators-emphasizing-quality-journalism-and-press-freedom/">The Global Youth and News Media Prize 2021: Honoring news media with COVID-19 coverage targeted to children, and educators emphasizing quality journalism and press freedom</a> appeared first on <a href="https://milmagazine.org">MIL Magazine</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://milmagazine.org/convocatorias/the-global-youth-and-news-media-prize-2021-honoring-news-media-with-covid-19-coverage-targeted-to-children-and-educators-emphasizing-quality-journalism-and-press-freedom/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Frank Pasquale and his &#8220;New Laws of Robotics&#8221;: &#8220;Robots should not counterfeit human characteristics. AI should not intensify arms races&#8221;</title>
		<link>https://milmagazine.org/interviews/frank-pasquale-y-sus-nuevas-leyes-de-la-robotica-los-robots-no-deben-falsificar-las-caracteristicas-humanas-la-ia-no-debe-intensificar-las-carreras-de-armas/</link>
					<comments>https://milmagazine.org/interviews/frank-pasquale-y-sus-nuevas-leyes-de-la-robotica-los-robots-no-deben-falsificar-las-caracteristicas-humanas-la-ia-no-debe-intensificar-las-carreras-de-armas/#respond</comments>
		
		<dc:creator><![CDATA[Sally Tayie]]></dc:creator>
		<pubDate>Wed, 09 Dec 2020 19:45:01 +0000</pubDate>
				<category><![CDATA[Interviews]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Starred]]></category>
		<guid isPermaLink="false">http://www.aikaeducacion.com/?p=12727</guid>

					<description><![CDATA[<p>Films like &#8220;I, Robot&#8221; or &#8220;Ex Machina&#8221; have presented a dystopian future of robots threatening humanity. We are witnessing rapid, massive transformations in Artificial Intelligence and its diverse applications across different fields. Is this a threat of human obsolescence? Is the process of the diffusion of [&#8230;]</p>
<p>The post <a href="https://milmagazine.org/interviews/frank-pasquale-y-sus-nuevas-leyes-de-la-robotica-los-robots-no-deben-falsificar-las-caracteristicas-humanas-la-ia-no-debe-intensificar-las-carreras-de-armas/">Frank Pasquale and his &#8220;New Laws of Robotics&#8221;: &#8220;Robots should not counterfeit human characteristics. AI should not intensify arms races&#8221;</a> appeared first on <a href="https://milmagazine.org">MIL Magazine</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Films like &#8220;I, Robot&#8221; or &#8220;Ex Machina&#8221; have presented a dystopian future of robots threatening humanity. We are witnessing rapid, massive transformations in Artificial Intelligence and its diverse applications across different fields. Is this a threat of human obsolescence? Is the diffusion of technology unfolding fairly? What about the unknown aspects of data collection? What about the regulation of AI?</p>



<p>All of these questions and more are addressed by <a href="https://www.brooklaw.edu/Contact-Us/Pasquale-Frank">Frank Pasquale</a>, who spoke with AIKA about Artificial Intelligence (AI) in the era of COVID-19, regulation in this sector and his new book <a href="https://www.hup.harvard.edu/catalog.php?isbn=9780674975224#:~:text=New%20Laws%20of%20Robotics%20makes,to%20answer%20these%20questions%20alone.&amp;text=Sober%20yet%20optimistic%2C%20New%20Laws,center%20of%20an%20inclusive%20economy."><em>New Laws of Robotics: Defending Human Expertise in the Age of AI</em></a> (Harvard University Press, 2020), among other relevant topics.</p>



<p>Frank Pasquale is an expert in the law of artificial intelligence, algorithms and machine learning, and the author of works such as <em>New Laws of Robotics</em> and the widely cited <a href="https://www.hup.harvard.edu/catalog.php?isbn=9780674970847"><em>The Black Box Society</em></a> (Harvard University Press, 2015). In the latter, he developed a social theory of reputation, search and finance, and advocated pragmatic reforms to improve the information economy, including more vigorous enforcement of competition and consumer-protection law. <em>The Black Box Society</em> has been reviewed in Science and in Nature, published in several languages, and the fifth anniversary of its publication was marked with an international symposium in Big Data &amp; Society.</p>



<p>Pasquale is a professor of law at Brooklyn Law School, an affiliate fellow of the Yale Information Society Project, and a Minderoo High Impact Distinguished Fellow at the AI Now Institute. He also chairs the Subcommittee on Privacy, Confidentiality and Security of the National Committee on Vital and Health Statistics of the U.S. Department of Health and Human Services.</p>



<ul class="wp-block-list"><li><strong>In your new book &#8220;New Laws of Robotics&#8221;, you highlight the importance of creating teams or &#8220;partnerships&#8221; of humans and robots across different fields. Tell us more about the importance of these partnerships and the challenges that stand in the way of their success. What are the possible solutions for overcoming those challenges?</strong></li></ul>



<p>Let me give some examples from the early chapters of the book. In medicine, there is a very interesting set of partnerships developing between nurses and robots. One of them is the <em>Robear</em>, a robot designed to help nurses lift patients, especially heavier ones, out of bed. This is a really important innovation, because many nurses develop orthopedic problems from lifting extremely heavy patients, or patients who are very vulnerable and need to be lifted with great care. The robot is designed to allow a patient to be moved from, say, one bed to another, or from a bed to a chair, without demanding excessive physical effort from nurses. So I think this is a very good example of what I call <strong>the first new law of robotics in my book: AI and robotic systems should complement professionals, not substitute for them</strong>. It is a relatively narrow, well-defined task, the nurse is always present and brings in the robot to help. I think we are going to see more and more of these examples of robots, particularly for routine tasks; for example, carrying medications around hospitals.</p>



<p>Now, in terms of challenges, there clearly are challenges with this kind of AI: for example, is it too costly? Does it get in the way? Can we pack more and more tasks into a robot like <em>Robear</em>? There will be very interesting questions for the future. Some AI developers want to build very expansive AI systems that not only perform manual tasks but also take on roles like caregiving, trying to offer something like empathy, or to appear empathetic. So you can imagine a robot that tries to look sad when a patient is suffering, or to look happy when a patient, say, walks a few steps further than they normally would.</p>



<p>That is the situation in which I think <strong>the robot has gone from complementing a professional to substituting for one</strong>, and, most importantly from the perspective of my book, <strong>counterfeiting humanity</strong>: the robot is faking feelings. What I mean by that is that the robot is <strong>mimicking human emotions, even though robots cannot actually feel human emotions. That is a disservice</strong> to patients and to nurses, who as professionals are trained in how to connect expressively with patients and to empathize with them. People can do that authentically because they have experienced the pain and disappointments of their own lives, and also joy and a sense of accomplishment. A robot cannot do that.</p>



<p>So <strong>these first two laws of robotics in my book come down to this: robots should complement professionals rather than substitute for them, and robots should not mimic or fake humanity</strong>. Mimicry, simulation and counterfeiting are three terms I use in the book and try to define very carefully, because I think the idea of counterfeiting has a resonance with counterfeit money that is apt here. My fear is that we have robots and AI systems that are faking human emotions and trying to claim our attention in the way human beings can. It is like living in an economy where bad money drives out good, where counterfeit money, once in circulation, reduces our faith in the value of existing money. <strong>So robots or machines that fake human capacity and empathy will diminish our valuation of genuine human empathy</strong>, or confuse us about when it really exists and when it is a mere imitation of humans, or, worse still, humans imitating the imitation of humans.</p>



<p>Other kinds of partnerships might be in the military, where there are many robotic systems in development. There is something called the Octoroach, a robot designed to act like a cockroach but with eight legs, hence the &#8220;octo&#8221;. It could crawl inside buildings and keep watch on things. There are drones that could be robotically autonomous; some would launch ballistic missiles, amounting to a form of autonomous killing machine. There are killer robots that automatically shoot at individuals. So, for example, in particularly contested war zones, you can have a robot controlling a machine gun that detects someone approaching who, through machine vision or auditory systems, sounds like or looks like an enemy. These are all examples I find worrying, mainly because they violate <strong>my third new law of robotics</strong>, which is: <strong>robotic and AI systems should not intensify arms races</strong>. I think this is a literal arms race: once one nation has a fleet of autonomous killer drones, other countries will develop their own fleets of autonomous killer drones, and this leads to ever-growing investment in machinery that could have very destructive consequences. So, to me, that is truly a critical problem of AI, and something we have to address.</p>



<p>The first law in the book, that a robot complements a professional, would dictate that any military robot be controlled by a person, or at least be controllable by a person. But we also have to be sure we are in an environment where that control can be maintained. There are military futurists who worry that not joining the military arms race threatens their ability to act quickly enough to counter an attack. Out of this concept developed the idea of push-button warfare (war waged with (nuclear) missiles that can be launched at the push of a button), and I am trying to help us avoid that! I think we do not want to live in a world where some great power, or even a lesser power, develops a robotic system so fast in attack that everyone else has to have robotic defenses to counter it&#8230; and those, in turn, become targets of new forms of attack, and so on.</p>



<p>So that might be one example of an answer to the question about the problems standing in the way of successful partnerships. It is fascinating when you think about success in the military context: what is it? This was the hardest chapter of the book to write, because from the perspective of one country&#8217;s military, success means being so intimidating to other countries that they do not even try to fight you. But that cannot be the definition of success for the world! Success for the world has to be about something more like multipolarity: the idea that there are multiple poles of power in the world, not just a few countries with the power to intimidate or destroy everyone else.</p>



<ul class="wp-block-list"><li><strong>Tell us about your favorite case of &#8220;partnering with technology&#8221;. Why do you see it as an example of success?</strong></li></ul>



<p><strong>My laws favor collaboration, rather than substituting robotics for professionals</strong>. I think one of the most successful paradigms I see is happening right now in medicine, for example <strong>doctors who are pattern recognizers working with Artificial Intelligence experts to avoid errors</strong>. You see, the idea here is that you still have dermatologists looking at skin abnormalities. Say there is a mole on someone&#8217;s hand; the dermatologist will examine that mole and diagnose it as a melanoma or not, based on their existing experience and their knowledge of the medical literature. But sometimes there is this fear in the back of their mind: it does not look like one, but it could be; as in, I am 80% sure it is not, but maybe it is. Is a biopsy in order? That can be painful and inconvenient for the patient.</p>



<p>I think in the future you will see more and more scans performed by machine-vision systems that give people a much better idea of the probability that something is a cancer, to help specialists avoid errors. So I think that is a very important aspect of AI: it could do a lot to prevent errors and help many people in dermatology, pathology, radiology; all those forms of pattern recognition, helping doctors feel confident they will not be sued for failing to detect an unusual form of cancer. Just as AI is becoming much more successful in driving as an error preventer rather than as a driver itself. Of course, I do expect driving to become fully automated, and I do not expect to see the same outcome in medicine, precisely because there AI is just an input of information into a professional field that needs to be understood and applied by a responsible professional.</p>



<p>Of course, there is the question of whether the AI is wrong too. But in general, my hope is that these will be very powerful ways of ensuring fewer errors in the medical system, which is good, of course, because many people die from medical errors. So we want to try to minimize this as much as possible. I believe there was a medical report in 1998 that estimated more than 90,000 deaths per year in the U.S. due to medical errors, and we really have not done much better since then. So the important question becomes how we use the best technology to do better.</p>



<p>I also <strong>think that in education there are some good partnerships, where you have robots that can teach children lessons that are not available from local teachers</strong>. And that is particularly important when you think about younger children. For example, imagine a family that wants its young child to learn Chinese, but does not know Chinese, and neither does anyone around them. In that case you can have an interactive, entertaining, well-designed robot to teach them Chinese. That is really important as a new opportunity.</p>



<p>But I think it will also be a situation in which <strong>we do not want to replace the teachers themselves</strong>. Because personal interaction is constitutive of good teaching, there has to be a responsible person interacting with students, mediating between them and all the technology that may help or harm, assist or parasitize, interest or distract.</p>



<p>It is also about democracy, about maintaining diverse priorities and forms of knowledge in society. Teachers are going to teach other subjects in which they may have particular expertise or a particular point of view, along with the ability to provide a socially interactive model for all students. But they cannot do it alone, since they certainly will not know all the world&#8217;s languages, or other things that might be more interesting in mathematics, the arts, coding, culture, history, social science, and so on. So in all those areas AI and robotics can be incredible supplements, and the teacher can also act as a quality evaluator, helping students and their families know which AI is more useful and which is less useful. So I think that is an important example of partnerships with technology.</p>



<ul class="wp-block-list"><li><strong>In the introduction to &#8220;New Laws of Robotics&#8221;, besides raising questions about the best use of AI through &#8220;robotic and human interaction&#8221; within a regulated framework, you highlight the importance of democratizing decision-making rather than leaving it in the hands of a powerful few. </strong><strong>What do you mean by this? How can such democratization be achieved?</strong></li></ul>



<p>I will use the example I just gave of teaching robots; teachers and bots or AI that could be used to teach certain lessons. Imagine we went much further than what I just described and found ourselves in a society like the U.S. with, say, 10,000 teachers in public high schools teaching American history. And then the school boards say, &#8220;you know, we are taxed too much to support these 10,000 American history teachers, so we are going to have one teaching robot with a prescribed history course, and we are going to roll it out to every classroom, and it is going to be a single history class for all of the United States&#8221;. I think that would be a very worrying development, because there are people in different schools, with different backgrounds, with different ideas about what is important in history and what is not. I want to see all those people teaching history in diverse ways across the country; I do not want everyone to receive the same version of history.</p>



<p>Of course, you could say we might try to program all of them into some robot that has 10,000 different kinds of teaching in it, but I still think that would miss <strong>the point of democratization, because part of distributing power and expertise means that there are people who have some control over their daily lives, their own part of the world.</strong> This idea of having control over some corner of life, or some part of it, is one of the reasons why many more people support a job guarantee (or at least government programs supporting employment) than support universal basic income.</p>



<p>Going to work in any particular context gives you a certain level of control. Even when the boss is controlling, you still have a level of control or autonomy in your position, and you have socialization; your socialization into the common project of the workplace, as part of a larger group of workers. So I think that is part of the key to democratization in the future of automation policy: we want <strong>human beings to have the capacity to help govern their workplaces and to govern the development of technology within them</strong>.</p>



<p>You see this idea in Elizabeth Anderson&#8217;s book <em>&#8220;Private Government&#8221;</em>. She has described the problem we have today of so many workplaces being governed by bosses with an iron fist; you know, it is like a dictatorship, and she says we have to democratize workplaces. I think she is right about that, but I also think she underestimates how, even in very controlled workplaces, there is still a kind of space for autonomy. Even an assistant night manager at a pharmacy, for example, might have the opportunity to meet other workers, organize the store in a certain way, figure out how to deliver packages or accommodate a shut-in, and so on.</p>



<p>There are forms of emotion one experiences in meaningful work, of participation in the world. They may exist only partially or imperfectly in many places. But that is also part of what the book is about: that we should listen carefully and try to understand people&#8217;s experiences in all walks of life. Moreover, law and policy can cultivate the feeling of (and the reality of) participation, autonomy and governance. <strong>When jobs include more technology, workers should have a say in how that technology is integrated</strong> and how it is designed.</p>



<div class="wp-block-image"><figure class="aligncenter size-large"><img decoding="async" width="667" height="1000" src="https://milmagazine.org/wp-content/uploads/2020/12/NLOR.jpg" alt="" class="wp-image-12701" srcset="https://milmagazine.org/wp-content/uploads/2020/12/NLOR.jpg 667w, https://milmagazine.org/wp-content/uploads/2020/12/NLOR-200x300.jpg 200w" sizes="(max-width: 667px) 100vw, 667px" /></figure></div>



<ul class="wp-block-list"><li><strong>Transparency is a term often used when addressing issues related to the ethical dimension of using artificial intelligence technologies in different domains. In your book &#8220;The Black Box Society&#8221; you mention transparency as an essential start for giving users control over the use of their data; however, it is argued that achieving full algorithmic transparency is complex, for example for economic reasons, such as preferring not to reveal the technology used to competitors, as well as the difficulty of sharing technical information with non-technical users. </strong><strong>In light of this, let me ask you:</strong></li></ul>



<p><strong>How would you define transparency in this context?</strong></p>



<p><strong>In what way do you see transparency as the most important ingredient for ensuring accountability?</strong></p>



<p><strong>To what extent is this issue currently addressed in related laws and regulations?</strong></p>



<p>I think that, with respect to transparency, chapter 5 of <em>The Black Box Society</em> has a good chart showing a spectrum of <strong>the <em>timing</em> of transparency and its <em>depth</em></strong>. So, you can make something completely transparent immediately, or you can wait years and years to make it only partially transparent, and there are many points between those two poles. My argument with respect to trade secrecy, one dimension of the issue here, is that even if trade secrets are valuable now, or for a few years, there must be some kind of disclosure. The lesson of patent law is that you have to disclose the information, and in exchange the right to exclude others from practicing it is protected.</p>



<p>Now, transparency with respect to timing and scope concerns the process. The substance here includes <strong>transparency with respect to data, algorithms, and their uses</strong>. Some say it is impossible to do much about AI and big-data transparency once systems reach a certain level of complexity. But I have a piece on the LSE (London School of Economics) blog called <em><a href="http://eprints.lse.ac.uk/81263/1/Bittersweet%20Mysteries%20of%20Machine%20Learning%20%28A%20Provocation%29%20_%20LSE%20Media%20Policy%20Project.pdf">&#8216;Bittersweet Mysteries of Machine Learning&#8217;</a></em>, where I argue that, at a minimum, even for systems where people say &#8220;oh, it is completely unexplainable, I have no idea how it works&#8221;, we should be able to demand to know what the data sources are, what data is fed into the system, and what the outputs are.</p>
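<p>The minimum disclosure argued for here, recording what goes into an opaque system and what comes out, can be illustrated with a short sketch. The wrapper, the <code>AuditedModel</code> name and the toy scoring rule are hypothetical illustrations, not taken from any real system:</p>

```python
import json
from datetime import datetime, timezone

class AuditedModel:
    """Wrap an opaque model so every input and output is recorded.

    The inner model stays a black box; the audit log captures what
    data went in and what inference came out, which is the minimum
    disclosure demanded above.
    """

    def __init__(self, model, log_path="audit_log.jsonl"):
        self.model = model
        self.log_path = log_path

    def predict(self, features: dict):
        decision = self.model(features)
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "inputs": features,   # what data was fed in
            "output": decision,   # what inference came out
        }
        with open(self.log_path, "a") as f:
            f.write(json.dumps(record) + "\n")
        return decision

# A stand-in "black box": an auditor need not understand its internals
# to inspect its inputs and outputs.
opaque = lambda row: "deny" if row["score"] < 600 else "approve"

wrapped = AuditedModel(opaque)
print(wrapped.predict({"score": 550}))  # prints "deny"
```

<p>Even when the model in the middle is inexplicable, the resulting log gives regulators and affected people a record of data sources and outcomes to examine.</p>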



<p>There may be a black box in the middle, but <strong>we still deserve to know what data goes in and what inferences or data come out.</strong> Now, when it comes to the middle of things, I think one of the issues is that if we are dealing with something so complex that there is no way to describe it narratively, to explain it in algorithms comprehensible to human beings, then we have to think very deeply about all the ways we do not want to allow that kind of AI to affect human beings&#8217; life chances, and how we do not want it to affect the classifications, <em>rankings</em> and evaluations of people.</p>



<p>We do not want that because, essentially, <strong>we have laws establishing that certain ways of collecting data about people (and of using that data to judge them) are illegal, and if we cannot understand what data was used, or how it was used, then we cannot know that those laws have not been violated. </strong>We now have so many books and so much research on the various categories of transparency and machine learning, showing the biases in algorithmic systems. Virginia Eubanks, Safiya Noble, Ruha Benjamin, Andrew Ferguson, Ari Ezra Waldman, Margaret Hu [and others]; so many scholars have exposed these problems that it is no longer safe to entrust contested human classifications to black-box systems.</p>



<p>As for how this is currently addressed, the General Data Protection Regulation (GDPR) is one of the things that limits non-transparent profiling in various ways, and in the United States there are some privacy and financial laws that also protect individual rights and important social values in this area. I think we also need to go much further, because [the answer here] will require that certain forms of classification, rating and evaluation of people be carried out only on the basis of articulable criteria. It has to be done in an articulable way in order to maintain our standards of fairness in the process. If you do not do that, you do away with those standards, which are inextricably intertwined with language as the core of law, not with algorithms and not with computational evaluations.</p>



<p>As for the question of whether I think transparency is the most important ingredient for ensuring accountability: no. I think it is ultimately one ingredient, but there are many forms of accountability that would involve, for example, post hoc analysis of these systems to audit their impact on groups. Moreover, we can rightly say that in certain cases more narratively intelligible explanations, or simpler and more transparent rules (applied with discretion and flexibility), should replace machine learning, AI or algorithms. I also think that, in terms of accountability, the most accountable systems would involve showing people how the algorithmic world works, which is a step toward legitimacy. But this always has to be tested against other possible non-algorithmic ways of ordering those matters.</p>
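<p>A post hoc audit of a system&#8217;s impact on groups, of the kind mentioned above, can be sketched in a few lines. The function names and sample data are illustrative; the 0.8 benchmark mentioned in the comment is the rough &#8220;four-fifths&#8221; threshold used in US employment guidance:</p>

```python
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, approved: bool) pairs
    recorded after the fact from a deployed system."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(decisions):
    """Ratio of the lowest to the highest group selection rate.
    Values well below 1.0 flag a disparity worth investigating
    (US employment guidance often uses 0.8 as a rough threshold)."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Hypothetical audited outcomes for two demographic groups:
outcomes = [("A", True), ("A", True), ("A", False), ("A", True),
            ("B", True), ("B", False), ("B", False), ("B", False)]
print(selection_rates(outcomes))   # A: 0.75, B: 0.25
print(disparate_impact(outcomes))  # 0.25 / 0.75 = 0.333...
```

<p>The point of the sketch is that such an audit needs only recorded inputs and outcomes, not access to the model&#8217;s internals, which is why post hoc analysis can complement transparency rather than depend on it.</p>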



<ul class="wp-block-list"><li><strong>The use of AI to provide technological solutions to the current COVID-19 crisis has been seen in different countries through government apps for tracking COVID cases. These initiatives have been met with doubts by many people concerned mainly about privacy, access to mobile phone data and government surveillance. What do you think of these apps? To what extent do you think those doubts and concerns are justified, considering that the same users may have other apps that collect data all the time?</strong></li></ul>



<p>An excellent last point! I will start with it: if you have concerns about privacy and the data collection of COVID-tracking apps, the important next step in articulating those concerns is to <strong>explain exactly what the marginal loss of privacy is beyond already existing privacy law</strong>, given that the person involved is usually already using other forms of technology&#8230; Of course, there may be people who are not using mobile phones at all. But for those who have these kinds of systems on board, the marginal loss is something that has to be calculated.</p>



<p>With respect to the apps themselves, what I have seen is that <strong>there are places where these apps seem doomed to fail and places where they seem to have played a role in an excellent pandemic response</strong>. Let me start with the successful countries. Part of the literature on South Korea, in English-language journals and elsewhere, indicates that in the wake of the 2015 MERS epidemic, South Korea amended its privacy laws so as to ensure rapid coordination and data collection, informing the Korean authorities of exactly where everyone was moving, whether they had just entered the country, and whether they had been exposed to someone with a case of COVID. If you look at South Korea&#8217;s success in tracking disease clusters, you can quickly understand exactly where a person who appears to be a super-spreader has moved, quickly identify those people in order to quarantine them, support them, and know where they were at that stage. All of that speaks strongly in favour of exceptionally broad and thorough data collection for a very limited purpose, namely public health.</p>



<p>And that suggests to me that these <strong>COVID-tracking apps could play a very important role, particularly at the beginning of epidemics</strong>. If we contrast the example of South Korea with the rollout of COVID contact-tracing apps in, say, Europe, or in the UK right now, there is a clear distinction: the governments of the EU, the UK and the US have not mustered the serious determination and state capacity that South Korea and Taiwan had. So in a sense they do not deserve (as much as more capable states do) to have access to the relevant data. If they were more competent, the calculus might be different.</p>



<p>My question is how a tracking app would lead, in any reasonable way, to a better allocation of resources in the EU, the UK and the US. And I have my doubts! I think <strong>so many people have the disease that it is very hard to imagine the tracking app being very effective in helping us better understand the spread.</strong> There are also problems in that people can be in the same place but unable to see each other because there is a wall between them; they work in different departments, but the tracing app will flag a person who has not been infected and has had no exposure to the infected person, simply because they were located in the same building. Those are the kinds of failures that could occur.</p>
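<p>For readers unfamiliar with how these apps work, here is a heavily simplified sketch of the decentralized matching design used (in far more elaborate form) by exposure-notification systems. The one-per-day identifiers and the 15-minute threshold are illustrative only, and the blind spot described above is visible in the code: the app sees only radio proximity and duration, never walls:</p>

```python
import secrets

def daily_ids(n_days):
    """Each phone broadcasts fresh random identifiers (simplified to
    one per day; real protocols rotate them every few minutes)."""
    return [secrets.token_hex(8) for _ in range(n_days)]

def exposures(heard, published_positive_ids):
    """heard: {identifier: minutes_nearby} observed over Bluetooth.
    Flag identifiers of diagnosed users heard for a sustained period.
    'Nearby' means radio proximity only: a wall between two people
    does not change the signal much, so it cannot be detected here."""
    MIN_MINUTES = 15
    positives = set(published_positive_ids)
    return [i for i, mins in heard.items()
            if i in positives and mins >= MIN_MINUTES]

alice = daily_ids(3)
# Bob's phone heard Alice's day-0 ID for 20 min, a stranger for 40 min,
# and Alice's day-2 ID only briefly.
bob_heard = {alice[0]: 20, "stranger-id": 40, alice[2]: 5}
# Alice tests positive and uploads the IDs she broadcast:
print(exposures(bob_heard, alice))  # flags alice[0] only
```

<p>Nothing in <code>heard</code> distinguishes a shared office from adjacent rooms in the same building, which is exactly the false-positive mode described in the interview.</p>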



<p>It is also an enormous problem of data collection, and of differentiation. Early on, or when things are well controlled, the results of data collection affect the lives of relatively few people (who must be quarantined). Later, if you are in the middle of a pandemic, you are suddenly talking about data collection with consequences for thousands or millions of people with respect to COVID exposure. That risks potentially discriminatory uses of the data, and worse.</p>



<p>In general: AI-enhanced public health surveillance is a good way to help a competent, fast-acting public health authority stop a pandemic quickly and nip it in the bud. Moreover, <strong>the privacy invasion that comes with that kind of capacity, the triggering of continuous location tracking of everyone for public health purposes, is a restriction of liberty far less damaging to freedoms than what you see happen when a pandemic spins out of control</strong>. I realize that will be controversial as a model for the future of pandemic control, but I think if you look at how Korea did it, it was essential; it was also essential to the Chinese response, though I am not so sure about Vietnam or Taiwan (which relied heavily on border controls). Nevertheless, I look at the freedoms that have been enjoyed for many months in those countries, free of the fear of deadly disease, and the small sacrifice they made at the beginning to achieve it, and I think they are actually much freer, in a sense, than the liberal democracies that unfortunately merely &#8220;advise&#8221; citizens to stay home, stay safe, and so on.</p>



<p>Of course, <strong>thinking comparatively, there may be different paths out of the crisis</strong>. So perhaps there is an Australian path, an extremely strict lockdown for a prolonged period; then you could say there is also South Korea&#8217;s high-tech surveillance approach; and China seems to be a kind of combination of the two, that is, the long closure of the borders plus very close tracking of everyone. But I think what cannot be questioned is that this is the moment for the US, the EU, South America and Central America to think really deeply about what the successful countries did right, because, you know, this is a world-historical problem. The death and disease are horrible. And their effects will not end when the pandemic ends (if it ever ends: incompetent management of it has now effectively created the opportunity for mutant strains to emerge, as we saw on Danish mink farms, perhaps defeating or reducing the effectiveness of vaccines). For example, because of the loss of economic growth and opportunity over the last seven months, each American household is projected to lose (on average) $125,000 in future earnings. And what goes without saying is the enormous suffering and loss of life.</p>



<ul class="wp-block-list"><li><strong>Today, with the current situation observed around the world, there is a significantly greater dependence on technology platforms in many fields. For example, we see platforms like Google dominating distance education, Zoom growing for business and working from home, and Facebook and Twitter maintaining their role of providing quick bites of information and updates, accentuating platform capitalism. What benefits and risks do you see in this process?</strong></li></ul>



<p>I think it is an extremely risky process! It is giving enormous power and global reach to mostly American companies (and some Chinese firms as well), and I do not trust many of these mega-firms. <strong>On the contrary, I believe that nations around the world need to develop more forms of technological sovereignty.</strong></p>



<p>In addition, governance needs to be distributed. Distributed governance in education means that teachers mediate between the technology and the students, rather than the technology interacting directly with the students. I think something very similar should apply to platforms. Ideally, I would like to see, in each country, multiple search engines and multiple social networks (with APIs for interoperability, of course, and data sharing for search engines). I hope to see that on the horizon, because what we suffer from now is simply an enormous concentration of power.</p>



<p>I also hope we will see more break-ups of mega-firms. For example, it is ridiculous that Facebook, WhatsApp and Instagram are controlled by a single company run by a single man with exceptional influence over its board, its management and its users. He is basically an emperor, as I suggest in my work on &#8220;<a href="https://www.opendemocracy.net/en/digitaliberties/from-territorial-to-functional-sovereignty-case-of-amazon/">functional sovereignty</a>&#8221;. That is, in multinational corporations CEOs have the power to choose many of the important people on their boards over time, and the board is said to run the company; but if the CEO has chosen the board and can remove people from it, then who is really in charge? And in these big tech companies the CEO is often even more powerful than the average corporate CEO. With CEOs holding all this power, there has to be a look at breaking up these companies. I mean, separating Google and YouTube, separating Facebook, WhatsApp and Instagram; there are many ways of doing it that Lina Khan, Elizabeth Warren, Sally Hubbard, Stacy Mitchell, Tim Wu and others have proposed, and I think we should.</p>



<ul class="wp-block-list"><li><strong>The year 2016 has been seen as a turning point marking the beginning of a tangible impact of oligarchy, through the engineering and reshaping of the public sphere and the manipulation of public opinion. For example, Brexit and the US elections were two events that reflected Facebook&#8217;s impact on politics. After the user-privacy scandals and the attempts to regulate such platforms, and given the current scenario of the US elections, how much progress do you see in that regard?</strong></li></ul>



<p>I think these platforms are trying to look busy, but I believe they have taken very few meaningful [actions] to control highly suspect interventions, both by authoritarian populists, nationalists and white-supremacist political parties and by the foreign agents who support them and generally sow chaos, and I think this is very problematic! [Such examples represent] fundamental challenges to the idea of self-regulation by these platforms. As you note, there are some free-speech concerns; however, these are private companies using free-expression law to limit the government&#8217;s ability to regulate them. They therefore either have to take on that regulatory role themselves, to govern their own speech, or they have to let government regulators take on that role (accepting that they are common carriers). Doing neither is a recipe for chaos and a descent into authoritarianism (including the incredibly damaging lies Trump is now spreading about the elections in the United States).</p>



<p>I hope that in the future we will see many more government interventions to maintain the integrity of elections, because I think there are enormous problems with the fact that the President of the United States, President Trump, simply lies openly, and many of his followers, including several in the Republican Party, do the same. Examples proliferate around the world; I give numerous examples in <strong>chapter 4 of my book <em>New Laws of Robotics</em>, on automated media</strong>. I think [this situation is] incredibly worrying, and that <strong>we need to see governments begin to enforce basic norms of truthfulness and decency, through laws against hate speech, on these platforms.</strong></p>



<p>And if they do not, my prediction is that government will be taken over by the people who use those cheap tricks of political appeal to seize democracy. In other words: either we control the public sphere democratically, or we allow it to be subverted by demagogues who will control it in an authoritarian way.</p>



<p>I mean, with respect to authoritarian leaders around the world, we have seen many examples. I think the problem will only get worse until there are progressive governments, with some notion of fair play and decency in political appeals, that intervene forcefully to guarantee a public sphere that is truly respectful of the freedom of all citizens and does not feature the horrors of efforts to terrorize, harm, baselessly stigmatize or spread lies about particular political parties, minority ethnic groups and other vulnerable groups. Mary Ann Franks, Carrie Goldberg, Danielle Citron and K-Sue Park are brilliant on this front; they are intellectual leaders of a movement for a better public sphere.</p>



<p>With that, I think we have to reflect deeply on these kinds of questions, and I believe we really have to reframe things because of <strong>how easily social media allows outright lies, complete fabrications, to spread</strong>. Since 2016 we have been learning more and more about the prerequisites of democracy. <strong>We need an informed population, not one that is continually exposed to lies, disinformation and propaganda.</strong></p>



<ul class="wp-block-list"><li><strong>What are the main challenges standing in the way of platform regulation and algorithmic regulation? And how do you see future progress in this area?</strong></li></ul>



<p>I think the main problem in this area is an insufficient appreciation that governance happens anyway. It is not the case that we can completely deregulate the platforms, have no government interference in them, and then have no governance at all, with complete freedom prevailing. In fact, governance will happen, and good governance is necessary for freedom.</p>



<p>We must also recognize that people often feel an objective need to be on a platform and therefore do not really have the choice of whether to be on it or not. In those settings, <strong>the ability of professionals to step in and set some rules is crucial. We are not liberating people by keeping government out; in fact, we are often just freeing them up to be manipulated or marginalized by a platform</strong>.</p>



<p>I know there is an easy counter-argument here, which would be: &#8220;you just called certain leaders authoritarian, and now you want governments to set rules in my life, what is going on? You want the authoritarians to do that!&#8221; My answer is: no, I do not want authoritarians to do that, but <strong>I do want countries that are not authoritarian to recognize quickly how easy it is for authoritarians to take advantage of the current information environment</strong>, the platform environments, and to prevent that kind of thing from happening there. That, I think, is the critical issue.</p>



<p>With respect to other questions of platform governance, one of the biggest problems is that governments are too slow in trying to redistribute the platforms&#8217; rewards. If you look at the revenues of Facebook, Google, Amazon and so on, these are revenues that could easily be redistributed to the businesses these platforms are squeezing, including many local media outlets. There are many ways we could redistribute those funds. I think we need to think deeply about that, because right now those funds are concentrated mainly in the hands of the shareholders and top managers of these companies, and we have to consider what kind of power this gives them and <strong>how we can ensure that this level of power and wealth is not so great that it overwhelms democratic processes</strong>.</p>
<p>The post <a href="https://milmagazine.org/interviews/frank-pasquale-y-sus-nuevas-leyes-de-la-robotica-los-robots-no-deben-falsificar-las-caracteristicas-humanas-la-ia-no-debe-intensificar-las-carreras-de-armas/">Frank Pasquale y sus &#8220;Nuevas leyes de la robótica&#8221;: &#8220;Los robots no deben falsificar las características humanas. La IA no debe intensificar las carreras de armas&#8221;</a> appeared first on <a href="https://milmagazine.org">MIL Magazine</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://milmagazine.org/interviews/frank-pasquale-y-sus-nuevas-leyes-de-la-robotica-los-robots-no-deben-falsificar-las-caracteristicas-humanas-la-ia-no-debe-intensificar-las-carreras-de-armas/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Some of Frank Pasquale&#8217;s &#8216;New Laws of Robotics&#8217;: &#8220;Robots should not fake human characteristics. AI should not intensify arms races&#8221;</title>
		<link>https://milmagazine.org/interviews/some-of-frank-pasquales-new-laws-of-robotics-robots-should-not-fake-human-characteristics-ai-should-not-intensify-arms-races/</link>
					<comments>https://milmagazine.org/interviews/some-of-frank-pasquales-new-laws-of-robotics-robots-should-not-fake-human-characteristics-ai-should-not-intensify-arms-races/#respond</comments>
		
		<dc:creator><![CDATA[Sally Tayie]]></dc:creator>
		<pubDate>Tue, 08 Dec 2020 18:00:02 +0000</pubDate>
				<category><![CDATA[Interviews]]></category>
		<guid isPermaLink="false">http://www.aikaeducacion.com/?p=12700</guid>

					<description><![CDATA[<p>Versión en español Movies such as I Robot or Ex Machina have featured a dystopian future of robots threatening humanity. We are now witnessing rapid massive transformations in Artificial Intelligence and its various applications in different fields. Is this threatening human obsolescence? Is the process of technology dissemination developing fairly? What about the unknown sides [&#8230;]</p>
<p>The post <a href="https://milmagazine.org/interviews/some-of-frank-pasquales-new-laws-of-robotics-robots-should-not-fake-human-characteristics-ai-should-not-intensify-arms-races/">Some of Frank Pasquale&#8217;s &#8216;New Laws of Robotics&#8217;: &#8220;Robots should not fake human characteristics. AI should not intensify arms races&#8221;</a> appeared first on <a href="https://milmagazine.org">MIL Magazine</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p><a href="https://milmagazine.org/en-profundidad/frank-pasquale-y-sus-nuevas-leyes-de-la-robotica-los-robots-no-deben-falsificar-las-caracteristicas-humanas-la-ia-no-debe-intensificar-las-carreras-de-armas/">Versión en español</a></p>



<p>Movies such as <em>I, Robot</em> or <em>Ex Machina</em> have featured a dystopian future of robots threatening humanity. We are now witnessing rapid, massive transformations in Artificial Intelligence and its various applications in different fields. Is this threatening human obsolescence? Is the process of technology dissemination developing fairly? What about the unknown sides of data collection? What about regulating AI? How can robots truly serve humanity, and to what extent? All these questions and more are addressed by <a href="https://www.brooklaw.edu/Contact-Us/Pasquale-Frank"><strong>Frank Pasquale</strong></a>, who talks to AIKA about his new book &#8220;<a href="https://www.hup.harvard.edu/catalog.php?isbn=9780674975224#:~:text=New%20Laws%20of%20Robotics%20makes,to%20answer%20these%20questions%20alone.&amp;text=Sober%20yet%20optimistic%2C%20New%20Laws,center%20of%20an%20inclusive%20economy.">New Laws of Robotics</a>&#8221;, AI in the age of COVID-19, AI regulation and other relevant aspects.</p>



<p>Frank Pasquale is an expert on the law of artificial intelligence, algorithms, and machine learning, and author of New Laws of Robotics: Defending Human Expertise in the Age of AI (Harvard University Press, 2020). His widely cited book, The <a href="https://www.hup.harvard.edu/catalog.php?isbn=9780674970847">Black Box Society</a> (Harvard University Press, 2015), developed a social theory of reputation, search, and finance, and promoted pragmatic reforms to improve the information economy, including more vigorous enforcement of competition and consumer protection law. The Black Box Society has been reviewed in Science and Nature, published in several languages, and its fifth anniversary of publication has been marked with an international symposium in Big Data &amp; Society.</p>



<p>Pasquale is Professor of Law at Brooklyn Law School, an Affiliate Fellow at the Yale Information Society Project, and the Minderoo High Impact Distinguished Fellow at the AI Now Institute. He is also the Chairman of the Subcommittee on Privacy, Confidentiality, and Security of the National Committee on Vital and Health Statistics at the U.S. Department of Health and Human Services.</p>



<ul class="wp-block-list"><li><strong>In your new book &#8220;New Laws of Robotics&#8221;, you highlight the importance of creating teams or &#8220;partnerships&#8221; of humans and robots in different fields. Tell us more about the importance of such partnerships and the challenges that stand in the way of their success. What are possible solutions to overcome such challenges?</strong></li></ul>



<p>Let me give some examples from the first few chapters in the book. In medicine, there&#8217;s a really interesting set of partnerships developing between nurses and robots. One of these is the Robear, a robot designed to help nurses lift patients, especially heavier patients, from the bed. This is a really important innovation because many nurses develop orthopaedic problems from lifting extremely heavy patients, or patients who are very vulnerable and need to be lifted very carefully. The robot is designed to enable a transfer of the patient from, say, a bed to another bed, or a bed to a chair, without demanding excessive physical exertion from nurses. So this I think is a very good example of what I call the <strong>first new law of robotics in my book; AI and robotic systems should complement professionals and not substitute for them</strong>. It is a relatively narrow and well-defined task, the nurse is always present, and the nurse brings in the robot to assist. I think we are going to see more and more examples of robots in routinized tasks; for example, carrying drugs around hospitals.</p>



<p>Now, in terms of challenges, clearly there are challenges in terms of this type of AI; is it too costly? Does it get in the way? Can we include more and more tasks in a robot like Robear? There are going to be really interesting questions for the future. Some AI developers want to develop very extensive AI systems that are not just doing manual tasks but are also taking on roles like care, trying to offer something like empathy or to look empathetic. So you can imagine a robot that tries to look sad when a patient is in pain or look happy when a patient, say, takes a few steps beyond what they normally would.</p>



<p>That&#8217;s the situation where I think the <strong>robot has gone beyond complementing a professional</strong> to substituting for them and, more importantly from my book&#8217;s perspective, <strong>counterfeiting humanity</strong>: the robot is faking feeling. What I mean by that is the robot is <strong>mimicking human emotions, even though robots can&#8217;t actually feel human emotions. That is a disservice</strong> to patients and to nurses, who as professionals are trained in how to expressively connect with patients and empathize with them. Persons can authentically do that because they have experienced pain and disappointment in their own lives, and also joy and a sense of accomplishment. A robot cannot do that.</p>



<p>So these <strong>first two laws of robots in my book are about: robots should complement professionals and not substitute for them, and robots should not be mimicking or faking humanity</strong>. Mimicry, fakery and counterfeiting are three terms that I use in the book and try to define very carefully, because I think the idea of counterfeiting has a resonance with counterfeit money that is appropriate here. My fear is that we will have robots and AI systems that fake human emotions and try to claim our attention the way human beings can. That&#8217;s like living in an economy where bad money drives out good, where fake money, once in circulation, reduces our faith in the value of existing money. <strong>So robots or machines faking human capacity and empathy will diminish our valuation of genuine human empathy,</strong> or we&#8217;ll be confused about when it actually exists and when it is mere mimicry of humans, or, worse, humans mimicking the mimicking of humans.</p>



<p>Other sorts of partnerships could be in the military, where there are lots of robotic systems being developed. There is something called the Octoroach, a robot designed to act like a roach but with eight legs, thus the &#8216;octo&#8217; in its name. It could crawl into buildings and surveil things. There are drones that could become autonomous; some would launch ballistic missiles on their own, a form of autonomous killing machine. There are killer robots that automatically fire upon individuals. So, for example, in particularly contested war zones, you can have a robot controlling a machine gun that fires when machine-vision or machine-hearing systems sense someone coming who looks or sounds like an enemy. Those are all examples that I think are troubling, primarily because they violate my <strong>third new law of robotics</strong>, which is that <strong>robotics and AI systems should not intensify arms races.</strong> I feel like this is a literal arms race, where once one nation has a fleet of killer autonomous drones, other countries are going to develop their own fleets of killer autonomous drones, and it leads to ever-increasing investment in machinery that could have vastly destructive consequences. So, that to me is really a critical problem of AI, and something we need to address.</p>



<p>The first law in the book, that a robot complements a professional, would dictate that any military robot be controlled by a person, or at least be controllable by a person. But we also need to be sure we are in an environment where such control can be maintained. There are military futurists who worry that if we don&#8217;t join the military arms race, that becomes a threat to our ability to act fast enough to counteract an attack. From this concept developed the idea of a push-button war, and I am trying to help us avoid that! I think we do not want to live in a world where some great power, or even a lesser power, develops a robotic system so fast at attack that everyone else has to have robotic defences to countervail it&#8230; which in turn become targets for new forms of attack, and so on.</p>



<p>So, that might be an example in response to the question of challenges that stand in the way of successful partnerships. It&#8217;s fascinating to think about what success even means in a military context. This was the hardest chapter of the book to write, because from the perspective of one country&#8217;s military, being successful means being so intimidating to other countries that they don&#8217;t even try to fight you. But that certainly cannot be the definition of success for the world! Success for the world has to mean something more like multi-polarity; the idea that there are multiple poles of power in the world, not just a few countries with the power to intimidate or destroy everyone else.</p>



<ul class="wp-block-list"><li><strong>Tell us about your favourite use case when it comes to &#8220;partnering with technology&#8221;? Why do you see it as a successful example?</strong></li></ul>



<p><strong>My laws are in favour of partnering, as opposed to substituting robotics for professionals.</strong> I think one of the most successful examples I see happening right now is in medicine: <strong>doctors who are pattern recognizers working with Artificial Intelligence experts to avoid errors</strong>. The idea here is that you still have dermatologists looking at skin abnormalities. Say there&#8217;s a mole on someone&#8217;s hand; the dermatologist will examine that mole and diagnose it as a melanoma or not, based on their existing expertise and their knowledge of the medical literature. But there is sometimes this fear at the back of their mind: what if it does not look like a melanoma but is one? Like, I am 80% sure it is not, but maybe it is. Is a biopsy in order? That may be painful and inconvenient for the patient.</p>



<p>I think that in the future you are going to see more and more scans being done by machine vision systems that will give people a much better sense of how likely something is to be a cancer, to help specialists avoid errors. So that, I think, is a very important side of AI: it could do a lot to avoid error and help a lot of people in dermatology, pathology, radiology; all those forms of pattern recognition, helping physicians feel confident that they won&#8217;t be sued for missing an unusual form of cancer. Just as in driving, AI is becoming much more successful as a preventer of error rather than as the driver itself. Of course, I expect that driving will eventually become totally automated, but I do not expect to see the same result in medicine, precisely because AI there is only one information input among many in the professional field, one that needs to be understood and applied by a responsible professional.</p>



<p>Of course, there is the question of what happens if the AI is also wrong! But in general, my hope is that these are going to be very powerful ways of ensuring less error in the medical system, which is good of course, because a lot of people die of medical errors. So we want to try to minimize this as much as possible. I think there was a medical report in 1998 that estimated over 90,000 deaths in the US per year due to medical errors, and we haven&#8217;t really done that much better since then. So, the important question becomes: how do we use the best technology to do better?</p>



<p>I also think <strong>that in education there are some good partnerships where you have robots that can teach children lessons that are not available from local teachers</strong>. And that&#8217;s particularly important when you think about younger children. For instance, imagine a family that wants its young child to learn Chinese, but doesn&#8217;t know Chinese, and neither does anyone around them. In this case you can have an interactive, entertaining, well designed robot to teach them Chinese. That&#8217;s really important as a new opportunity.</p>



<p>But I think that this is also going to be a situation where <strong>we don&#8217;t want to be substituting for the teachers</strong> themselves. Because personal interaction is <em>constitutive </em>of good teaching—there has to be a responsible person interacting with students, mediating between them and all the technology that could help or hurt, aid or parasitize, interest or distract them.</p>



<p>There is also a question of democracy, of maintaining diverse priorities and ways of knowing in society. Teachers are going to be teaching subjects where they have particular expertise or viewpoints, and the ability to offer a socially interactive model for all the students. But they can&#8217;t do it alone, as they certainly won&#8217;t know all the languages of the world or the more niche interests in math, arts, coding, culture, history, social sciences, etc. So, in all those areas AI and robotics can be incredible supplements, and the teacher can also stand as a quality evaluator, helping students and their families know which AI is most useful and which is less useful. So I think that&#8217;s an important example of partnership with technology.</p>



<ul class="wp-block-list"><li><strong>In &#8220;New Laws of Robotics&#8221; book introduction, besides raising questions about making the best use of AI through &#8220;robotic and human interaction&#8221; within a regulated framework, you highlight the importance of democratizing decision making rather than leaving it in the hands of the powerful few. What do you mean by this? How can such democratization be achieved?</strong></li></ul>



<p>I am going to use the example I just used with the teaching robots; teachers and bots or AI that could be used to teach certain lessons. Imagine that we went much further than what I just described, and consider a society like the US with, say, 10,000 teachers in public high schools who teach American history. And then school boards say, &#8220;you know, we are taxed way too much to support these 10,000 teachers of American history, so let&#8217;s just have a teaching robot with one prescribed history course, and we are going to roll that out to all classrooms, and it&#8217;s just going to be one history class for all the United States.&#8221; I think that would be a very troubling development. There are people in different schools, with different backgrounds and different ideas about what&#8217;s important in history and what&#8217;s not. I want to see all of those people diversely teaching history throughout the country; I don&#8217;t want everyone to receive the same version of history.</p>



<p>Of course, you could say that we could try to program all of them into some robot that has 10,000 different types of teaching in it, but I still think that would be missing <strong>the point of democratization, because part of distributing power and expertise means that there are people who have some control over their daily life, their own part of the world</strong>. This idea of having control over some corner of life is one reason that so many more people support a job guarantee (or at least job-supporting government programs) than support universal basic income.</p>



<p>Going to work in any particular context gives you some level of control. Even when the boss is controlling, you still have a level of control or autonomy in your position, and you have socialization in the common project of the workplace, as part of a larger group of workers. So, I think that is part of the key to democratization in the future of automation policy: we want <strong>the ability for human beings to help govern their workplaces and govern the development of technology in them</strong>.</p>



<p>You see this idea in Elizabeth Anderson&#8217;s book &#8216;Private Government.&#8217; She describes the problem we have today that so many workplaces are ruled by bosses with an iron fist; it&#8217;s like a dictatorship, and she says we need to democratize workplaces. I think she&#8217;s right about that, but I also think she underestimates how, even in very controlled workplaces, there is still some room for autonomy. Even a night assistant manager at a drug store, for example, might have a chance to get to know other workers there, arrange the store in a certain way, try to find ways to deliver packages or otherwise accommodate a shut-in person, etc.</p>



<p>There are feelings one has in meaningful work, of participation in the world. They may exist only partially or imperfectly in many places. But that is also part of what the book addresses; that we should listen closely and try to understand the experiences of people in all walks of life. Moreover, law and policy can cultivate the feeling (and reality) of participation, autonomy, and governance. <strong>When jobs include more technology, workers should have some say in how that technology is integrated</strong> and how it is designed.</p>



<div class="wp-block-image"><figure class="aligncenter size-large"><img decoding="async" width="667" height="1000" src="https://milmagazine.org/wp-content/uploads/2020/12/NLOR.jpg" alt="" class="wp-image-12701" srcset="https://milmagazine.org/wp-content/uploads/2020/12/NLOR.jpg 667w, https://milmagazine.org/wp-content/uploads/2020/12/NLOR-200x300.jpg 200w" sizes="(max-width: 667px) 100vw, 667px" /></figure></div>



<ul class="wp-block-list"><li><strong>Transparency is a term that is often used when addressing the ethical dimension of using Artificial Intelligence technologies in different fields. In your book &#8220;The Black Box Society&#8221; you mention transparency as an essential beginning towards giving users control over the use of their data. However, there is an argument that full algorithmic transparency is complex to achieve, for instance for economic reasons, such as companies preferring not to reveal their technology to competitors, as well as the complexity of sharing technical information with non-technical users. In light of this, let me ask you:</strong></li></ul>



<p><strong>How would you define transparency in this context?</strong></p>



<p><strong>How do you believe that transparency is the most important ingredient to guarantee accountability?</strong></p>



<p><strong>How far is this area currently addressed in related laws and regulations? </strong><strong></strong></p>



<p>I think that, with respect to transparency, chapter 5 of <em>The Black Box Society</em> has a good chart that shows a spectrum of <strong>the <em>timing</em> of transparency and the <em>depth</em> of transparency</strong>. So, you can make something completely transparent immediately, or you can wait years and years to make it only partially transparent—and there are so many points in between those two poles. My argument with respect to trade secrecy, one dimension of the question here, is that even if trade secrets are valuable right now or for a few years, there must be some sort of disclosure. That&#8217;s the lesson of patent law: you disclose the information, and in exchange your right to exclude others from practicing the invention is protected.</p>



<p>Now, transparency with respect to time and scope is about process. The substance here includes <strong>transparency with respect to the data</strong>, <strong>the algorithms and their uses</strong>. Some say that it&#8217;s impossible to do much with AI and big data transparency once systems reach a certain level of complexity. But I have a piece on the LSE (London School of Economics) blog called <a href="http://eprints.lse.ac.uk/81263/1/Bittersweet%20Mysteries%20of%20Machine%20Learning%20%28A%20Provocation%29%20_%20LSE%20Media%20Policy%20Project.pdf">&#8216;Bittersweet Mysteries of Machine Learning&#8217;</a>, where I say that, at a very minimum, even in systems where people say &#8220;oh, it&#8217;s completely unexplainable&#8221; or &#8220;I&#8217;ve no idea how it works, it&#8217;s so complex,&#8221; still, we should be able to demand to know what the sources of data are, what data is being fed into the system, and what the outputs are.</p>



<p>There may be a black box in the middle, but <strong>we still deserve to know what data is going in and what inferences or data are coming out</strong>. Now, when it comes to the centre of things, I think that one of the issues there is that, if we are again dealing with something so complex that there is just no way to narratively describe it, to explain it in algorithms that are comprehensible by human beings, then we need to think very deeply about all the ways in which we do not want to allow that sort of AI to affect human beings&#8217; life chances; how we don&#8217;t want it to affect classifications, rankings and evaluations of people.</p>



<p>We don&#8217;t want that because, essentially, <strong>we have laws that state that certain ways of gathering data about persons (and using that data to judge them) are illegal, and if we can&#8217;t understand what data was used, or how it was used, then we can&#8217;t know that it doesn&#8217;t violate those laws</strong>. We now have so many books and so much research on the various categories of transparency and machine learning showing the biases in algorithmic systems. Virginia Eubanks, Safiya Noble, Ruha Benjamin, Andrew Ferguson, Ari Ezra Waldman, Margaret Hu, and others; there are just so many scholars that have exposed these problems that it is no longer safe to trust black box systems with controversial human classifications.</p>



<p>In terms of how this is currently addressed, the General Data Protection Regulation (GDPR) is one thing that limits non-transparent profiling in various ways, and in the US there are some privacy and financial laws that also protect individual rights and important social values in this area. I think we also have to go much further, because the answer here will require that rankings, ratings, and evaluations of persons be completed only on the basis of articulable criteria; it has to be done in an articulable way in order to maintain our standards of fairness in the process. If you don&#8217;t do that, then you get rid of those standards; those standards are inextricably intertwined with language as the core of law, not algorithms and not computational evaluations.</p>



<p>As for the question of whether I believe that transparency is the most important ingredient to guarantee accountability: no, I think that ultimately it&#8217;s an ingredient, but there are many other forms of accountability out there, involving, for example, post hoc analysis of these systems in order to audit their impact on groups. In addition, we may rightly say that in certain instances, more narratively intelligible explanations, or simpler and transparent standards (applied with discretion and flexibility), should replace machine learning, AI, or algorithms. I also think that, in terms of accountability, more accountable systems would involve showing people how the algorithmic world works, which is one step towards legitimacy. But this always has to be tested against other potential non-algorithmic ways of ordering those affairs.</p>



<ul class="wp-block-list"><li><strong>The use of AI to provide tech solutions to the current COVID-19 crisis has been witnessed in different countries through governments&#8217; applications to track COVID cases. Such initiatives have been met with doubts from many who are mainly concerned about privacy, access to data on mobile phones, and government surveillance. What do you think about these applications? How far do you believe such doubts and concerns are justified, considering that the same users may have other applications which collect data all the time?</strong></li></ul>



<p>Excellent last point! I&#8217;ll start with the last point to say that, if you have concerns about the privacy and data collection of COVID tracking apps, the next important step in articulating those concerns is <strong>to explain exactly what the margin of lost privacy is, beyond already existing losses of privacy</strong>, given that the person involved is usually already using other forms of technology. Of course there may be persons who are not using cell phones at all. But for those who have these sorts of systems on board, the marginal loss is something that has to be calculated.</p>



<p>With respect to the apps themselves, what I have seen is that <strong>there are places where these apps seem doomed to fail and there are places in which they seem to have played a role in excellent pandemic response</strong>. Let me start with the successful countries. Some of the literature on South Korea, in English-language journals and other venues, indicates that in the wake of the 2015 MERS epidemic, South Korea amended its privacy laws to ensure rapid coordination and collection of data, informing the Korean authorities about exactly where everyone was moving: whether they had just come into the country, or whether they had been exposed to someone with a confirmed case. Look at the success of South Korea at tracking clusters of the disease: rapidly understanding exactly where a person who seems to be a super-spreader has moved, quickly identifying those individuals to put them in quarantine, to support them in quarantine, to know where they were in quarantine. All of those things weigh in favour of exceptionally broad and comprehensive data collection for a very narrow purpose, which is public health.</p>



<p>And that to me suggests that these <strong>COVID tracking apps could play a very important role, particularly at the start of epidemics</strong>. If we contrast the South Korean example with the introduction of COVID tracking apps in, let&#8217;s say, Europe or the UK right now, there is a clear distinction: the EU/UK/US governments have not marshalled the serious resolve and state capacity that South Korea and Taiwan did. So in a sense they don&#8217;t deserve (as much as the more capable states do) to have access to the relevant data. If they were more competent, the equities might be different.</p>



<p>My question is: how would the tracking app lead to better allocation of resources in a reasonable way in the EU/UK/US? And I have my doubts! I think that <strong>there are just so many people now who have the illness that it&#8217;s very hard to imagine the tracking app being very effective at helping us to better understand the spread</strong>. There are also practical problems: people could be in one place but unable to see each other because there&#8217;s a wall between them; they work in different departments, yet the tracking app will flag a person who was not infected and had no exposure to the infected person, simply because they were in the same building. Those are the sorts of failures that could occur.</p>



<p>It&#8217;s also a huge data collection issue, and a differential one. When it&#8217;s early, or things are well controlled, the results of data collection only impinge on the lives of relatively few people (who must quarantine). Later, if you&#8217;re in the middle of a pandemic, suddenly you&#8217;re talking about collecting data with consequences for thousands or millions of people with respect to COVID exposure. That risks possible discriminatory data use, and worse.</p>



<p>In general: AI-enhanced public health surveillance is a good way of helping a competent and rapidly acting public health authority stop a pandemic and nip it in the bud. Moreover, <strong>the invasion of privacy entailed by that sort of capacity for on-going location tracking of everyone for public health purposes is far less harmful to liberties than what you see happening when a pandemic gets out of control</strong>. I realize that that&#8217;s going to be controversial as a model for the future of pandemic control, but if you look at how Korea did it, I think it was essential, as it was to the Chinese response; I am not so sure about Vietnam or Taiwan (which relied a lot on border control). Nevertheless, I look at the freedoms now enjoyed for many months in those countries (from fear of deadly disease), and the small sacrifice they made at the beginning to achieve it, and I feel that they are in fact much more free in a sense than the liberal democracies haplessly &#8220;advising&#8221; citizens to stay home, stay safe, etc.</p>



<p>Of course, <strong>in thinking comparatively, there may be different paths out of the crisis</strong>. Maybe there&#8217;s an Australian path of an extremely strict lockdown for an extended period of time; there is also the South Korean high-tech surveillance approach; and in China there seems to be a combination of the two, that is, the long closure of borders and the very close tracking of everyone. But I think what is beyond question is that this is the time for the US, the EU, South America and Central America to really think deeply about what the successful countries did right, because this is a world-historical problem. The death and illness are horrifying. And their effects will not end when the pandemic ends (<em>if </em>it ends—incompetent handling of it has now effectively created the opportunity for mutant forms, like we saw in the Danish mink farms, to arise, and perhaps defeat or reduce the effectiveness of vaccines). For example, because of the loss of economic growth and opportunity over just the past seven months, every single American household is predicted to lose (on average) $125,000 in future earnings. And that is to say nothing of the enormous suffering and loss of lives.</p>



<ul class="wp-block-list"><li><strong>Nowadays, with the current situation witnessed by the whole world, there is significantly higher dependence on technological platforms in many fields. For example, we see platforms such as Google dominating distance learning, Zoom rising for business and working from home, and Facebook and Twitter maintaining their role of providing quick bites of information and updates, accentuating platform capitalism. What benefits and risks do you see in this process?</strong></li></ul>



<p>I think it&#8217;s an extremely risky process! This is giving enormous power and global reach to largely American companies (and also some Chinese firms), and I don&#8217;t trust many of these mega-firms. <strong>By contrast, I think nations around the world need to develop more forms of technological sovereignty.</strong></p>



<p>Also, governance needs to be distributed. Distributed governance in education involves teachers mediating between technology and students, instead of direct interaction from technology to students. I think that something very similar should be applied to platforms. I would ideally like to see, in each country, multiple search engines, and multiple social networks (with APIs for interoperability, of course, and data sharing for the search engines). I hope to see that on the horizon because what we suffer now is just an enormous situation of power concentration.</p>



<p>I also hope that we see more break-ups of mega-firms. For example, it&#8217;s ridiculous that Facebook, WhatsApp and Instagram are controlled by a single company led by one man with exceptional influence over its board, management, and users. He&#8217;s basically an emperor, as I suggest in my work on &#8220;<a href="https://www.opendemocracy.net/en/digitaliberties/from-territorial-to-functional-sovereignty-case-of-amazon/">functional sovereignty</a>.&#8221; I mean, in multinational corporations, CEOs have the power to choose many important people on their board over time, and the board is said to run the company; but if the CEO has chosen the board and can knock people off the board, then who&#8217;s really in charge? And in these big tech firms, the CEO is often even more powerful than the average corporate CEO. With these CEOs having all this power, there has to be a look at breaking up these firms. I mean, break apart Google and YouTube; break apart Facebook, WhatsApp and Instagram. There are many ways to do this that Lina Khan, Elizabeth Warren, Sally Hubbard, Stacy Mitchell, Tim Wu, and others have proposed, and I think we should.</p>



<ul class="wp-block-list"><li><strong>The year 2016 has been regarded as a turning point, marking the beginning of oligarchy&#8217;s tangible impact through the engineering and reshaping of the public sphere and the manipulation of public opinion. For example, Brexit and the US elections were two events that reflected the impact of Facebook on politics. After the user-privacy invasion scandals and the attempts to regulate such platforms, and with the current US elections scene, how far do you see progress in that sense?</strong></li></ul>



<p>I think that these platforms are trying to look busy, but I believe that they have taken very few significant actions to control highly suspect interventions, both by authoritarian populist, nationalist, white supremacist political parties and by foreign agents that support them and that generally sow chaos, and I think that this is very problematic! Such examples represent fundamental challenges to the idea of self-regulation by these platforms. As you note, there are some concerns about free expression; however, these are private companies who use free expression laws to limit the government&#8217;s ability to regulate them. So they therefore have to take on that regulatory role and govern their own speech, or they have to allow government regulators to take on that role (by admitting they are common carriers). Doing neither is a recipe for chaos and a descent into authoritarianism (including the incredibly damaging lies now spread by Trump about US elections).</p>



<p>I hope that in the future we see a lot more interventions by governments to maintain the integrity of elections, because I think there are enormous problems with the fact that the President of the United States, President Trump, just outright lies, and many of his followers, many in the Republican party, do the same thing. Examples around the world proliferate; I give many examples in <strong>chapter 4 of my book <em>New Laws of Robotics</em>, about automated media</strong>. I think that this situation is incredibly troubling and that <strong>we need to see governments start to impose basic standards of truthfulness and decency in anti-hate speech laws on these platforms</strong>.</p>



<p>And if they don&#8217;t, my prediction is that government will be taken over by the people who use those cheap tricks of political appeal. In other words: we either democratically control the public sphere, or we allow it to be subverted by demagogues who will control it in an authoritarian way.</p>



<p>I mean, we’ve seen that with respect to authoritarian leaders around the world, there are so many examples. I think that the problem only gets worse until you have progressive governments with some notion of fair play and decency in terms of political appeals, strongly intervening to ensure a public sphere that is truly respectful of the liberty of all citizens and that is not going to feature the horrors of efforts to terrorize, harm, baselessly stigmatize or spread lies about certain political parties, minority ethnic groups, and other vulnerable groups. Mary Ann Franks, Carrie Goldberg, Danielle Citron, and K-Sue Park are brilliant on this front—they are intellectual leaders of a movement for a better public sphere.</p>



<p>I think we really have to think deeply about this type of issue, and we really have to reframe things, because of <strong>how easily social media enables complete lies, complete fabrications to spread</strong>. Since 2016 we have been learning more and more about the prerequisites of democracy; <strong>we need to have an informed populace, not one that is continuously being exposed to lies, disinformation, and propaganda.</strong></p>



<ul class="wp-block-list"><li><strong>What are the main challenges standing in the way of platforms and algorithmic regulation? And how do you see future progress in this area?</strong></li></ul>



<p>I think the main problem in this area is that there is not enough appreciation that governance happens. It&#8217;s not as if we can just say we are going to completely deregulate platforms and not have any government interference in them, and that then there is no governance and complete freedom prevails. In fact, governance will happen, and good governance is necessary to freedom.</p>



<p>We also must recognize that people often feel, quite objectively, a need to be on the platform, and therefore they don&#8217;t have a choice about whether to be on it or not. In those environments <strong>the ability of professionals to intervene and set some rules is crucial.</strong> <strong>We&#8217;re not freeing people by keeping government out; in fact, we&#8217;re often un-freeing them to be manipulated or marginalized by a platform</strong>.</p>



<p>I know that there is an easy counter-argument here, which would be: &#8220;you just called out certain leaders as authoritarian and now you want governments to put rules on my life, what gives? You want authoritarians to do that!&#8221; My answer is to say: no, I don&#8217;t want authoritarians to do that, but <strong>I do want countries that are not authoritarian to quickly recognize how easy it is for authoritarians to take advantage of the current information environment</strong>, the platform environment, and to stop that kind of thing from happening there. That, I think, is the critical issue.</p>



<p>With respect to other issues of platform governance, one of the biggest problems is that governments are too slow to try to redistribute the bounty from platforms. If you look at the amount of revenue of Facebook, Google, Amazon, etc., that is revenue that could easily be redistributed to the firms these platforms are squeezing out of business, including much local media. There are many ways in which we could redistribute such funds. I think we should think deeply about that, because right now those funds are primarily concentrated in the hands of shareholders and top managers of these firms, and we have to think about what sort of power this gives them and how we can <strong>ensure that that level of power and wealth doesn&#8217;t get so great that it overwhelms democratic processes</strong>.</p>
<p>La entrada <a href="https://milmagazine.org/interviews/some-of-frank-pasquales-new-laws-of-robotics-robots-should-not-fake-human-characteristics-ai-should-not-intensify-arms-races/">Some of Frank Pasquale&#8217;s &#8216;New Laws of Robotics&#8217;: &#8220;Robots should not fake human characteristics. AI should not intensify arms races&#8221;</a> se publicó primero en <a href="https://milmagazine.org">MIL Magazine</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://milmagazine.org/interviews/some-of-frank-pasquales-new-laws-of-robotics-robots-should-not-fake-human-characteristics-ai-should-not-intensify-arms-races/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Renee Hobbs: &#8220;Whether propaganda is beneficial or harmful is in the eye of the beholder&#8221;</title>
		<link>https://milmagazine.org/interviews/renee-hobbs-whether-propaganda-is-beneficial-or-harmful-is-the-eye-of-the-beholder/</link>
					<comments>https://milmagazine.org/interviews/renee-hobbs-whether-propaganda-is-beneficial-or-harmful-is-the-eye-of-the-beholder/#respond</comments>
		
		<dc:creator><![CDATA[Sally Tayie]]></dc:creator>
		<pubDate>Thu, 29 Oct 2020 13:03:29 +0000</pubDate>
				<category><![CDATA[Interviews]]></category>
		<guid isPermaLink="false">http://www.aikaeducacion.com/?p=12037</guid>

					<description><![CDATA[<p>In times where the whole world is witnessing unprecedented changes in the different aspects of daily life, it is important to have the knowledge and skills to cope positively. In that sense, media literacy can be a mindset and a lifestyle rather than a curriculum. Renee Hobbs is an internationally-recognized authority on digital and media [&#8230;]</p>
<p>The post <a href="https://milmagazine.org/interviews/renee-hobbs-whether-propaganda-is-beneficial-or-harmful-is-the-eye-of-the-beholder/">Renee Hobbs: &#8220;Whether propaganda is beneficial or harmful is in the eye of the beholder&#8221;</a> appeared first on <a href="https://milmagazine.org">MIL Magazine</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>In times where the whole world is witnessing unprecedented changes in the different aspects of daily life, it is important to have the knowledge and skills to cope positively. In that sense, media literacy can be a mindset and a lifestyle rather than a curriculum.</p>



<p><a href="https://mediaeducationlab.com/about/renee-hobbs">Renee Hobbs</a> is an internationally-recognized authority on digital and media literacy education. Through community and global service and as a researcher, teacher, advocate and media professional, Hobbs has worked to advance the quality of digital and media literacy education in the United States and around the world. She is Founder and Director of the Media Education Lab, whose mission is to improve the quality of media literacy education through research and community service<a href="#_edn1">[i]</a>.</p>



<p>In this fruitful discussion with AIKA, Renee Hobbs explains how media literacy can be understood and applied, across its various dimensions, in today&#8217;s context. Hobbs also talks to us about her recently published book <strong>&#8216;Mind over Media: Propaganda Education for a Digital Age&#8217;</strong>, where she breaks down the true meaning of propaganda from a modern perspective and the role of media literacy. We also talk to Hobbs about her upcoming book &#8216;<strong>Media Literacy in Action</strong>&#8217;, which is designed to give youth an in-depth understanding of the concepts of media literacy, empowering them through knowledge and critical skills.</p>



<hr class="wp-block-separator"/>



<iframe loading="lazy" width="100%" height="300" scrolling="no" frameborder="no" allow="autoplay" src="https://w.soundcloud.com/player/?url=https%3A//api.soundcloud.com/tracks/919756594&#038;color=%23ff5500&#038;auto_play=false&#038;hide_related=false&#038;show_comments=true&#038;show_user=true&#038;show_reposts=false&#038;show_teaser=true&#038;visual=true"></iframe><div style="font-size: 10px; color: #cccccc;line-break: anywhere;word-break: normal;overflow: hidden;white-space: nowrap;text-overflow: ellipsis; font-family: Interstate,Lucida Grande,Lucida Sans Unicode,Lucida Sans,Garuda,Verdana,Tahoma,sans-serif;font-weight: 100;"><a href="https://soundcloud.com/aika-educacion" title="Aika Educación" target="_blank" style="color: #cccccc; text-decoration: none;" rel="noopener noreferrer">Aika Educación</a> · <a href="https://soundcloud.com/aika-educacion/renee-hobbs-aika-educacion" title="Renee Hobbs - Aika Educación" target="_blank" style="color: #cccccc; text-decoration: none;" rel="noopener noreferrer">Renee Hobbs &#8211; Aika Educación</a></div>



<p><a href="#_ednref1">[i]</a> Biography from <a href="https://mediaeducationlab.com/">https://mediaeducationlab.com/</a></p>
<p>The post <a href="https://milmagazine.org/interviews/renee-hobbs-whether-propaganda-is-beneficial-or-harmful-is-the-eye-of-the-beholder/">Renee Hobbs: &#8220;Whether propaganda is beneficial or harmful is in the eye of the beholder&#8221;</a> appeared first on <a href="https://milmagazine.org">MIL Magazine</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://milmagazine.org/interviews/renee-hobbs-whether-propaganda-is-beneficial-or-harmful-is-the-eye-of-the-beholder/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Teenage journalists reveal the untold stories of how their peers worldwide are making a difference during the pandemic, Aralynn McMane tells us about it</title>
		<link>https://milmagazine.org/interviews/teenage-journalists-reveal-the-untold-stories-of-how-their-peers-worldwide-are-making-a-difference-during-the-pandemic-aralynn-mcmane-tells-us-about-it/</link>
					<comments>https://milmagazine.org/interviews/teenage-journalists-reveal-the-untold-stories-of-how-their-peers-worldwide-are-making-a-difference-during-the-pandemic-aralynn-mcmane-tells-us-about-it/#respond</comments>
		
		<dc:creator><![CDATA[Sally Tayie]]></dc:creator>
		<pubDate>Fri, 23 Oct 2020 11:43:07 +0000</pubDate>
				<category><![CDATA[Interviews]]></category>
		<category><![CDATA[COVID-19]]></category>
		<category><![CDATA[interview]]></category>
		<category><![CDATA[pandemic]]></category>
		<guid isPermaLink="false">http://www.aikaeducacion.com/?p=11923</guid>

					<description><![CDATA[<p>With humanity transitioning to a new era – post COVID-19 – it is crucial to empower and involve youth, making them an active part of such transition. For the&#160;World Teenage Reporting Project&#62;COVID 19, teenage journalists in 19 countries covered 60 of the untold stories about how their peers were helping during the pandemic. The project [&#8230;]</p>
<p>The post <a href="https://milmagazine.org/interviews/teenage-journalists-reveal-the-untold-stories-of-how-their-peers-worldwide-are-making-a-difference-during-the-pandemic-aralynn-mcmane-tells-us-about-it/">Teenage journalists reveal the untold stories of how their peers worldwide are making a difference during the pandemic, Aralynn McMane tells us about it</a> appeared first on <a href="https://milmagazine.org">MIL Magazine</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>With humanity transitioning to a new era – post COVID-19 – it is crucial to empower and involve youth, making them an active part of such transition. For the<strong>&nbsp;</strong><a href="https://www.globalyouthandnewsmediaprize.net/project-world-teenage-reporting-pro">World Teenage Reporting Project&gt;COVID 19</a>, teenage journalists in 19 countries covered 60 of the untold stories about how their peers were helping during the pandemic. The project demonstrates the power of youth in connecting societies while having an active role in times of crises. It was organized by Global Youth &amp; News Media, a French nonprofit directed by Dr.&nbsp;<a href="https://www.linkedin.com/in/aralynnmcmane/">Aralynn Abare McMane</a>.</p>



<p>Aralynn Abare McMane, founding director of Global Youth &amp; News Media, is based in France and has worked in about 75 countries to persuade news media to pay better attention to young people and to help them understand journalism. She has been a reporter and editor, a journalism educator and researcher, and a media development program leader.&nbsp;In this interview, McMane talks to us about the details and impact of the project.&nbsp;</p>



<p><strong>The World Teenage Reporting Project presents an interesting proactive approach to covering stories from around the world during the pandemic. What prompted you to do this?</strong></p>



<p>Three things, actually: capacity, an inspirational model and getting annoyed.</p>



<p>I was bugged by what seemed to be the prevailing image of teenagers and twenty-somethings as either careless beach-goers who were bringing the virus home to Grandma or powerless kids stuck at home whining about their boredom. Later they got some sympathy for missing teenage milestones, but they were still portrayed as mostly useless.</p>



<p>Also, I had the capacity to organize the project because I decided to postpone the 2020 Global Youth &amp; News Media Prize competition, which I run with Jo Weir of London, to 2021. That provided some spare time to do something new (from confinement, like everybody else).</p>



<p>Meanwhile, in the course of writing a story about what editors of news for children were doing around the epidemic, I came across a lot of examples of testimonials from children and teenagers about how it was going for them in confinement. PBS NewsHour Student Reporting Labs did that as well but also provided tools for teenagers to go beyond their own experience and do real journalism. That made me realize what I could do as I had previously organized global youth journalism projects around interviewing inspirational people and taking over newsrooms for a day. Also, I knew that&nbsp;<strong>stories about helpers contributing to the solutions of a situation give hope</strong>&nbsp;about that situation.&nbsp;</p>



<p>I simply combined it all. Don&#8217;t get me wrong: testimonials about confinement are useful, as is the tough news we continue to need to face about this virus, but I think it&#8217;s important to hear also, loud and clear, about how young people are contributing.</p>



<div class="wp-block-image"><figure class="aligncenter size-large is-resized"><img loading="lazy" decoding="async" src="https://milmagazine.org/wp-content/uploads/2020/10/Imagen-1.jpg" alt="" class="wp-image-11924" width="388" height="304" srcset="https://milmagazine.org/wp-content/uploads/2020/10/Imagen-1.jpg 540w, https://milmagazine.org/wp-content/uploads/2020/10/Imagen-1-300x234.jpg 300w" sizes="auto, (max-width: 388px) 100vw, 388px" /></figure></div>






<p><strong>Why ask the teenage journalists to tell stories about someone else and not about their personal experience?</strong></p>



<p>Doing a story about someone else is real reporting, a harder, more powerful activity that really teaches how content created with a journalistic approach is different from other kinds of content. It takes a higher level of paying attention. It&#8217;s also a learning exercise, as all partner organizations had professional editors or mentors on hand to help.</p>



<p><strong>Tell us about the process of recruiting teenagers to become part of the project and the role of the different partners involved in facilitating the development of the project.</strong></p>



<p>It wasn&#8217;t complicated. I first contacted people around the world whom I already knew and who worked with teenage journalists. They were quick to join, so I was reassured that the project made sense. It just grew organically from there. The partners were simply people who had direct contact with teenage reporters. I ended up working with a few teenagers myself, but the success of the operation depended fully on those <strong>editors and advisers who had fully dedicated themselves to helping young people</strong> learn how to do professional reporting.</p>



<p><strong>Did you have selection criteria, and how were the selected young reporters prepared; were they given training?</strong></p>



<p>My selection criteria were basically, &#8220;Yes.&#8221; All I needed to know was that teenage news organizations understood the mandate: stories about how other teenagers were making a difference. They already knew the job. I was not there to edit. I was there to amplify. I understand that this is a very <strong>different approach from a more elaborate program that would involve journalistic training</strong>, because I have also done that. In this case, that was not the goal.</p>



<p><strong>Why did you think it would work?</strong></p>



<p>I’ve done some other global projects around news media and young people and found that both young people and their adult mentors respond well.</p>



<p>While at the World Association of Newspapers and News Publishers, I did a World Teenage News Takeover that encouraged&nbsp;<strong>newsrooms to turn over part of the news or commentary to teenagers</strong>&nbsp;(not a special section; the normal content), and a My Dream Interview Festival that encouraged teams to prepare interviews for adults they admired. Materials for doing both are still online and still viable, I think.&nbsp;</p>



<p>I have consistently found that the news staff who do that kind of work with young people are always ready to try something new, especially when it can be global and give them some fresh insights and approaches.</p>



<p><strong>What was the risk and what were the challenges you faced during the different phases of the project?</strong></p>



<p>I was worried that it would be considered too hard so nobody would do it. Once I began to contact people who I knew cared both about journalism and young people, I was reassured by the enthusiastic response. Frankly, it went very smoothly from there, with me just coordinating, doing showcases of the work and social media promotion to make their stories better known. I’m especially grateful to the New York Times Learning Network, whose promotion for the project as part of their annual podcast competition yielded some great content that would never have otherwise emerged.</p>



<p><strong>Besides the COVID-19 pandemic, we are also suffering from an &#8216;infodemic&#8217; with the spread of false information and misleading conspiracy theories. How far do you think this project helps in promoting awareness and quality journalism and in reinforcing Media and News Literacy among youth?</strong></p>



<p>I have long believed and strongly advocated the stance that <strong>doing journalism is a very powerful tool for reinforcing media and news literacy</strong>. Once you have been in the position of a reporter, having to decide what is real and verifiable enough to share with an audience, you never really look at &#8220;information&#8221; the same way. It&#8217;s not about becoming a cynic about all content. It&#8217;s about <strong>learning to always check the source of the original &#8220;information&#8221;</strong> and then look at multiple other sources, preferably primary sources, to confirm or discredit what you originally thought was true. [Learning about journalism is also reinforced by learning about the horrific price some people have paid just to do that job, but that&#8217;s another discussion.]</p>



<p>This project was also about validating a<strong>&nbsp;solutions journalism approach</strong>&nbsp;&#8212; looking at how covering the helpers can be useful in counteracting a stereotype and finding ways to solve the problem at hand.</p>



<p><strong>Can you describe the actual impact this experience had on the participating teenagers on the different levels; personal, social, educational? Do you think such experiences help youth to cope better with challenging situations like the one we are all currently living?&nbsp;</strong></p>



<p>It&#8217;s pretty standard psychology that if you can contribute during a crisis, you&#8217;ll be in better condition than people who can&#8217;t do so. I also heard much the same thing from some of the advisors to the teen journalists in the project. For example, Melissa Falkowski, advisor at The Eagle Eye in Parkland, Florida (USA), where 17 students and staff were gunned down in 2018, was the first on board for the project. She told me that &#8220;for student journalists that are stuck at home, this project gives them something to do. In my experience with trauma, having something to do and <strong>the ability to write about stories related to the trauma you have experienced or are experiencing can be very healing</strong>.&#8221;</p>



<p><strong>Is there going to be another part to this project in the future?</strong></p>



<p>This will sound contradictory, so please bear with me: In some ways, this was a unique, “unicorn” project, but I think there is potential for another edition. Rarely in human history has the whole planet experienced the same phenomenon and, at the same time, had the capacity to report about it. That was the case here. COVID-19 was everywhere. Teenagers everywhere &#8212; both journalists and protagonists &#8212; had few distractions as schools were either closed or on reduced, online schedules. That made it a truly unique time. But I think there may be a reason to do another edition. I am exploring doing it again, possibly around inspirational change makers, but this time I need a partner, and there may be a strong argument for some training. Also, I want to explore with someone a way to use the stories in the COVID-19 edition [and any further edition] as a classroom resource. Not an easy assignment.</p>



<hr class="wp-block-separator"/>



<p>GLOBAL YOUTH &amp; NEWS MEDIA</p>



<p>&gt; Organizes international initiatives to advance the interaction of youth and news media, most recently the <a href="https://www.globalyouthandnewsmediaprize.net/project-world-teenage-reporting-pro">World Teenage Reporting Project &gt; COVID-19</a>.</p>

<p>&gt; Awards an annual <a href="http://www.globalyouthandnewsmediaprize.net/">Prize</a> in partnership with <a href="https://ejc.net/about">The European Journalism Center</a>, <a href="https://news-decoder.com/about-us/">News-Decoder</a>, and <a href="https://newsinitiative.withgoogle.com/">The Google News Initiative</a>.</p>
<p>The post <a href="https://milmagazine.org/interviews/teenage-journalists-reveal-the-untold-stories-of-how-their-peers-worldwide-are-making-a-difference-during-the-pandemic-aralynn-mcmane-tells-us-about-it/">Teenage journalists reveal the untold stories of how their peers worldwide are making a difference during the pandemic, Aralynn McMane tells us about it</a> appeared first on <a href="https://milmagazine.org">MIL Magazine</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://milmagazine.org/interviews/teenage-journalists-reveal-the-untold-stories-of-how-their-peers-worldwide-are-making-a-difference-during-the-pandemic-aralynn-mcmane-tells-us-about-it/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
