We Count: Artificial Intelligence Inclusion Projects from Inclusive Design Research Centre

Resources

Support your learning through our searchable research library and discover valuable resources about many topics in artificial intelligence and data analytics, such as AI ethics, bias and data tools.

Select the We Count at Large tag to view a selection of speaking engagements and presentations by IDRC team members. Many of these resources showcase the efforts of IDRC Director Jutta Treviranus, whose pioneering work and insights in inclusive AI continue to inspire and lead the field.

Filters

Topics

  • AI and disability, small minorities and outliers (for the general public)
  • Work for people with disabilities in data science
  • AI ethics and policy
  • AI design and methods (for AI experts)
  • ICT Standards and Legislation

Researchers Built an “AI Scientist” — What Can It Do?

Source: Nature
Media Type: Website Article
Readability: 
  • Intermediate
Summary:

AI Scientist is one of the latest AI tools to automate parts of the scientific process. It currently has a limited range of applications, but further development of the tool is expected.

Researchers Examine Teens’ Use of Generative AI, Safety Concerns

Source: Tech Xplore
Media Type: Website Article
Readability: 
  • Intermediate
Summary:

A new study by University of Illinois Urbana-Champaign researchers has found that parents have little understanding of generative AI (GAI), how their children use it, and its potential risks. The researchers also found that GAI platforms don't offer enough protection to ensure children's safety.

Researchers Investigate Whether "Fairness Constraints" Mitigate Bias in Algorithms

Source: VentureBeat
Media Type: Website Article
Readability: 
  • Intermediate
Summary:

This article discusses the results of a new paper examining how effective fairness constraints are at alleviating bias against minority groups.

Researchers Reduce Bias in AI Models While Preserving or Improving Accuracy

Source: MIT News
Media Type: Website Article
Readability: 
  • Intermediate
Summary:

MIT researchers have developed a new technique that removes specific data points from a dataset in order to balance it and reduce bias.

Researchers Warn of Unchecked Toxicity in AI Language Models

Source: CTV News
Media Type: Website Article
Readability: 
  • Intermediate
Summary:

Researchers from MIT's Improbable AI Lab and the MIT-IBM Watson AI Lab are developing a “red-team language model” that is designed to generate problematic prompts that trigger undesirable responses from tested chatbots.

Research Suggests Using AI to Reduce Bias in Recruitment Is Counterproductive

Source: Stevens & Bolton
Media Type: Website Article
Readability: 
  • Intermediate
Summary:

The use of AI in recruiting is ineffective because the technology's assessments are shaped by irrelevant variables such as facial expressions, candidate attire, and background lighting.

Response to Office of the Privacy Commissioner of Canada Consultation Proposals Pertaining to Amendments to PIPEDA Relative to Artificial Intelligence

Source: MAIEI
Media Type: Website Article
Readability: 
  • Intermediate
Summary:

The Montreal AI Ethics Institute (MAIEI) presents its comments and recommendations after being invited by the Office of the Privacy Commissioner of Canada (OPCC) to provide feedback, both at a closed roundtable and on the OPCC's consultation proposal for AI-related amendments to Canadian privacy legislation.

Responsible AI Has a Burnout Problem

Source: MIT Technology Review
Media Type: Website Article
Readability: 
  • Intermediate
Summary:

Companies are under increasing pressure from regulators and activists to ensure that their AI products are developed in a way that mitigates potential harms. But responsible AI teams often lack support, which can leave people in these teams feeling undervalued, affecting their mental health and leading to burnout.

Responsible AI in Education: The Case for Open AI

Source: AI in Education
Media Type: Website Article
Readability: 
  • Expert
Summary:

OpenStax's Richard Baraniuk has contributed this article to our FLOE project's AI in Education collection, supported by the Hewlett Foundation in partnership with Etika Insights. Richard's article advocates for designing AI tools to support teachers rather than replace them, enabling educators to focus on mentorship while technology handles routine tasks. He argues that building AI on open, equitable foundations is essential to prevent amplifying existing inequalities. The AI in Education collection examines AI’s role in education, highlighting both opportunities and risks while emphasizing the importance of responsible implementation.
