Open Source e-Learning


Fabian Kruse - 16 Oct 2018, 13:52

Data Needs to be Tamed: Interview with Javiera Atenas

In her keynote at the International ILIAS Conference in Lucerne, Javiera Atenas (Open Education Working Group, London) painted a grim picture of data collectors and data traders in the area of education. We talked to her about data illiteracy, fair metrics and the right to forget.
For a data specialist, you’re surprisingly critical of tracking and storing educational data. Why is that?
I invite people to be cautious about why they are collecting data. Data is not the quintessential problem solver many technologists want us to believe it is. While it’s true that we live in a datafied society, the question is what the data is worth. As humans, we are not motors; we are not engines. Consequently, it doesn’t make sense to measure us by performance. Datafying processes always means simplifying them and forcing them into reductive categories. This doesn’t work well with people.

What’s more, in our datafied society we can observe an effect my sociologist friends call ‘poorology’: Rich people are rarely scrutinized or punished. Rather, the data we collect come either from poor people or from people with migrant backgrounds - because they are perceived as a problem. The result is that politicians use data in a twisted way to create a discourse around humans.
This becomes worse if you are not aware of the data that’s being collected.
Yes. You’re not aware because you are data-illiterate. And information-illiterate. It’s a kind of empowerment that you get if you can handle data. But if you don’t understand data, you become a subject of study. That’s not fair!

It’s also a question of participation. If we look at our political panorama, there’s a great lack of transparency in that regard. Our governments talk about participation and transparency, but how can you enable the participation of people who are data-illiterate?
Where would you start to improve data and information literacy?
From an early age! I’m not one to suggest that schools need to address every single problem of society. But learning how to use data and information is not just one subject among others - it’s an essential part of civic education.

When you teach history, you look at demographic data, migration flows et cetera. Why did Germans move to Chile in the 1800s? What’s behind that? You look at the data. And you teach pupils to understand this data. The same can be said for biology, and data literacy is helpful even for subjects like religion and language training. If you ignore the data and don’t handle it in your classes, you will raise data-illiterate people.
Big data is a trend. Software companies, employers and even schools are tempted to collect everything they can get their hands on - and to think about what to do with it later. Do we need more control over what is being collected in the first place?
Yes. For starters, we shouldn’t be looking into children’s family backgrounds at school. This is incredibly intrusive. Most parents are doing their best to provide good opportunities for their children. But if we reduce children to their economic background, we’re not considering this effort. This is damaging.

As white people coming from an educated background, we’re very lucky. But if you were born in a different situation, things change. To give you an example: The Chilean ID cards state where you were born. This doesn’t seem like a problem, but what if you were born in jail? This is an example of poorology - and if you don’t know anybody who was born in jail, you’re likely not even aware of this.
So which data should we be collecting?
It depends. Here’s an experiment a friend of mine came up with: What if you created a portrait out of someone’s data? What would it look like? Would the person depicted be pleased with it? Would they take it home and display it to the public? Or would they hide it under their bed so that nobody could see it? You don’t want to look ugly in your portrait.
But there is a difference between having data collected that I’m not aware of or might object to for good reasons, and having valid data collected that might not depict me in a positive way…
That’s true. The next question then is: Who will have access to that data? Let’s say you messed around a lot at school as a child. You were a bit of a joker, a clown. Maybe you got good grades, but this could still be turned into very negative data. And if this data is passed along, in secondary school they will know about it and put you into a certain category. This category might not be a fair way to portray you.

You should have the right to start again on a blank canvas! Consider crimes: you might be given a jail term, but after you’ve served your time, the crime you committed is eventually expunged. So you can start again. Your record gets cleaned up, and you’re ready to go. Currently, this does not happen with data. You cannot clean up your data record.

Imagine that after ending a relationship, your ex-partner would rate you and provide that data to a prospective new partner. It’s ridiculous. But this is exactly what they are doing with our data for potential employers.
So you’re demanding a right to forget?
We already have this for Google. Why don’t we have it for social data that is being collected by schools or governments?
Let’s have a look at the e-learning world. There is a trend in the commercial LMS world to collect as many user interactions as possible…
And this is the keyword: Interaction. I interact with Netflix a lot. Do I learn something from it?
Well, you certainly could!
No. I just browse things. It’s meaningless.
So do you demand better metrics in education or do you demand less metrics?
Fair metrics. This means that if you start university, you have the right to ask for a blank canvas. No school data should be used.
But then there is software like Turnitin. This program supposedly evaluates a student’s writing style and flags any apparent progress that seems ‘unnatural’. If your university decides to put this software to use, you would need a blank canvas again after a year or so. And then again. And again. So what data can be kept?
This is certainly up for debate. But whatever data your institution collects needs to be deleted after you finish university. Because if it isn’t, you never know who is going to reuse it. After you’ve finished university, you leave with a certificate and a bunch of grades, and that’s it. No more data required. I certainly would not want Turnitin to use any of my information and sell it to potential employers.
Some users are interested in getting better learning analytics in ILIAS. What is your suggestion to prevent abuse of data early in development?
Have a data governance committee that looks at the requirements of the client. It’s obvious that we all need to earn money, and there are certainly different requirements in terms of data collection. But bring your data governance to the customer. Try to create a dedicated position for data control and data management, to ensure that if someone leaves their job, their information is not transferred to potential new employers.

This should be true even when that person has been sacked. If there wasn’t any criminal misconduct involved, nobody outside the old company should be informed. Look at it this way: If I got sacked because of an argument with my boss and he keeps telling his version of the story publicly, I can sue him for badmouthing me. But if my boss shares data that I’m not aware of, he can use this data to portray me in a very negative way - painting only part of the whole picture. This could affect my future employment and earning potential.

Occupational data is already being used by corporations in the US to decide whom to employ. And Black people, Muslims and women tend to be punished because they don’t perform as well according to this data. Why is that? If a woman gets pregnant or needs to rush home to take care of her baby, maybe she simply does not ‘perform’ well?

Of course a company wants to collect data. But the employees should be aware of it. There must also be consequences for mismanagement of this data. Employees should be able to sue their employers if this happens.

We should be aware that data can go wild. It’s not a bad beast, but it needs to be tamed!
Isn’t this an ideological debate?
Yes it is: Neo-liberalism against humanism.
So with humanism on the defensive right now, you’re trying to create awareness of what is happening?
Create awareness, and develop boundaries. I wouldn’t want my employer to monitor my data after 5pm. I wouldn’t want my university to develop a student card that tracks my every movement. There’s a boundary. Don’t cross it!

In the end, numbers can be tortured until they give you the results you want. That’s why ethics and philosophy are so important. And if we don’t teach people to manage and assess data, they will end up being manipulated.
Javiera, thank you very much for this interview!
