
It Is a Busy Time for Tech & Ethics

Surveillance Graffiti - camera watching child

Recently we have seen a lot of bad news about digital technologies, social media platforms, and data breaches. A part of me hesitates to call it 'news,' since most of what's now part of public/popular knowledge has been known to those who work on social technology and information technology research for some time (as in a decade or so). Nevertheless, there is a steady beat of headlines these days on data being leaked, hacked, or just plain old sold.

Here's some good news: there are several new research efforts underway.* 

Susan Benesch and J. Nathan Matias announced the launch of their study to diminish Twitter abuse. I have had the pleasure of getting to know both scholars through a network that works to address online harassment and I am eager to see how their hypothesis that social norms can help to transform online behavior holds up in current U.S. media ecologies. I'm also very keen to learn from their framework for ethical research on and with a public platform. I appreciate that Benesch and Matias have put a lot of thought into the financial, legal, and ethical paradigms for their study.

In a related effort, the Second Workshop on Abusive Language Online is scheduled to take place at the EMNLP (Empirical Methods in Natural Language Processing) Conference in Brussels in October 2018. I am a part of the organizing committee for this event, which is impeccably led by Zeerak Waseem, and a participant from the first workshop held last year (you can see the Proceedings here - they give a hint of the really rich conversations we had over the course of a full day). I'm particularly excited that ALW2 continues the tradition of bringing media and digital studies and critical race studies scholars together with those working in natural language processing and information studies. 

Also this week, Facebook announced the launch of a new independent Elections Initiative in partnership with the Social Science Research Council, designed to give researchers access to proprietary data in order to better understand the impacts of FB on elections. Take Facebook's effort here with a grain of salt (and a dash of too late), but I can attest to the difficulty of getting access to proprietary systems for research purposes. I hope this helps address the black boxing of social media tech. As with the Twitter partnership, I also hope that the involvement of SSRC and several independent funding groups helps to ensure the independent and non-exploitative research we need. See more: SSRC President Dr. Alondra Nelson's statement regarding the launch. If you can stomach Zuck's "I feel bad" rhetoric, you can read more about the initiative in this Atlantic interview. Want more? Check out Cathy Davidson's post on the effort as well! 

Finally - if you're in the Seattle area 4/10, consider checking out this great panel: Consent in the Time of Data Mining (tomorrow's panel at Ethical Research for Human Centered Design in Seattle).

* caveat: I worry about "research." As women of color have taught me, too often the veneer of academic research is cover for work that reproduces the colonial and white supremacist structures that pervade dominant cultures, including those of research universities and initiatives. So I'll be watching all of this research carefully. I'm glad that there's attention to independence from the social media companies, but that doesn't guarantee that the research advances the cause of JUSTICE along with that of ethics (check out the work of the Center for Media Justice and this piece by Malkia Cyril and Karlos Schmeider on why we need media justice). Nor does independence from corporations ensure that those with privilege won't steal the work of others, as Moya Bailey and Trudy point out in their piece on the erasure of black women and femmes in social media work. If you're a white scholar working in social media spaces and you wonder if you need to do better, consider consulting the Respect and Power-and-Control Wheels created by the Digital Alchemists.

Image credit: Sam Felder, CC BY-SA 2.0



At San Jose State University, we're considering some of these dilemmas with a one-day gathering (2 panels + a student poster session + a keynote). Though we're focusing on robots and AI, I'm sure the conversation among the technologists and scholars of the local Silicon Valley will broach the issues surrounding the imposition of tech -- we'll certainly be discussing ethics. The tag line for our university is "powering Silicon Valley," and our Engineering College supplies a lot of workers for the local companies. We have long struggled to establish a working relationship with these companies; even Stanford has only a tertiary relationship with them, along the lines of a mentoring program between their students and tech magnates. There's little conversation about ethics (either before or after technology innovations are built). 

We're really hoping that the geographical proximity of our meeting to Silicon Valley will embolden everyone to participate in the conversation (without turning it into finger-wagging at tech companies). We're hopeful... I'm hopeful... that the meeting isn't simply an academic exercise. 

The event is free and open to the public. If anyone from HASTAC wants to join us, please do. Details:

Deep Humanities: A One-Day Symposium

May 1, 10am-4pm + 7pm Keynote

Rm 225, King Library, San Jose State University


10am-12pm, Panel 1, moderated by Dr. Revathi Krishnaswamy. This panel focuses on integrating the Humanities & Arts into the way artificial intelligence is conceived, on contributions to "human-centered design," and on related topics drawn from the panelists' interests.


  • Dr. Daniel Susser (Philosophy, SJSU – privacy & data ethics)
  • Dr. Janet Stemwedel (Philosophy, SJSU – ethics in science)
  • Dr. Winncy Du (Director of the Robotics, Sensor, and Machine Intelligence Laboratory, SJSU)
  • Andrew Blanton (Digital Media Arts)
  • Neeti Mehta (VP, Brand Strategy & Initiatives, Automation Anywhere – Role of AI in Organisations and Ethics in Corporations)

Student Poster Session: 12-2pm. During the two-hour break between panels, up to 50 students drawn from classes all over San Jose State University will present posters focusing on the larger contemporary context of science, technology, artificial intelligence, and/or Silicon Valley. 

2-4pm, Panel 2, moderated by Dr. Katherine D. Harris. This conversation takes a multi-disciplinary view of building algorithms powered by emotional data, algorithmic bias, the use of inclusive design methodology, and human-centered design.


  • Dr. Anand Vaidya (Philosophy, SJSU – embodied cognition & robotics)
  • Dr. Claire Komives (Chemical Eng/Biomedical Ethics, SJSU)
  • Martin Ford (author of Rise of the Robots, TED Talk)
  • Rhonda Holberton (Digital Media Arts, SJSU — technology & bodies)
  • Manjula Menon (fiction author; Physics, Electrical Engineering, Finance)

Keynote by Martin Ford 

7pm, Talk by Martin Ford, "Disruptive Technology: Do Robots Want Your Job?" @ The Tech Museum of Innovation



Jacque! Thank you for collecting this info. I thought I'd let you know that we're going to be launching the Digital Research Ethics Collaboratory in the next little while as well. Right now our website is up & running, but we're still building materials to post. It's a place to collect research stories from all participants in research communities. It's definitely an exciting time, as many people are doing this work and thinking through ethics, digital tech, research economies, etc.