Author Archives: Nelson Jarrin

Care

Self-care is not widely practiced in the Global North. We are accustomed to working and eating at our desks, hoping that our hard work gets noticed by upper management so we receive recognition. As the article provided suggests, we need to reclaim the moment to breathe and understand that most job requests are not emergencies. Many of us suffer whenever a person in upper management asks for a report; we try our best to complete it, worrying and stressing that our position is precarious. It takes a bit of mental strength to convince ourselves that our jobs aren't always on the line.

A form of care I practice is this: if I did not sign up for a high-pressure environment, then I will not stress over one. Many organizations artificially create high-pressure environments that eat away at our psyche. It is reported that BIPOC workers experience this the most. It is a form of institutionalized oppression built eons ago that is still used today, all under the guise of capitalism. Many organizations don't have the capacity to stop and keep backup for missing personnel. We often see minimum-wage workers refuse to take a sick day for fear of upsetting upper management, only for that loyalty to be thrown out the window the moment layoffs happen.

Organizational recognition of the worker is key to a productive environment. The burden of work should not fall on one person but on the team. Many organizations fail to prepare for absences and make the worker suffer to cover up their lack of preparation. Care should be accessible to everyone; individuals should not be forced to struggle to find it.

Nelson’s Final project idea

Precarity and ChatGPT

While AI algorithms can automate many tasks, people are still necessary to oversee and improve these systems.

I would like to touch upon how much human labor is involved in the creation of AI/ML (machine learning). The focus would be on precarity within the structures surrounding the creation of AI/ML, with subtopics on the power structures (FAANG companies) it can bring, plus whether there is any creation or implementation of care (ethics in AI).

I understand precarity as a state of being insecure, uncertain, or at risk of harm or disadvantage.

I currently believe that AI/ML is at an infant stage of its development, extremely insecure, and at risk of further disrupting and pushing humanity/society into unknown territory due to our capitalistic nature.

Examples of human labor in AI/ML are:

Data Collection and Labeling: AI systems rely on large amounts of data to learn and make predictions.

Who are the people who review this data? What are their roles? Do they have proper protections while working?

Since this is the initial stage of the process, I believe it is important to visualize.

Algorithm Development/Model Training and Validation: AI algorithms are typically created by human developers who write code.
System Implementation and Deployment: Human engineers and technicians are responsible for implementing and deploying AI systems in real-world settings.
Monitoring and Maintenance: After the AI system is deployed, people are responsible for monitoring its performance and making adjustments.

Who are these developers? How much bias do they introduce? Do we have any demographic data on their backgrounds? Once an AI system is in place, is there dedicated ethics personnel?

These are the questions I will start with; the sketch below shows where each stage sits in a typical pipeline.
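To make these stages concrete, here is a minimal toy supervised-learning pipeline with the human touchpoints marked. Everything in it (the data, labels, and model choice) is a hypothetical illustration, not a description of any real AI system:

```python
# A toy text-classification pipeline, annotated with where human labor
# enters each stage listed above. All data and labels are hypothetical.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# 1. Data collection and labeling: human annotators read each snippet
#    and decide its label (0 = negative, 1 = positive).
collected_texts = [
    "the product broke within a week",
    "refund took two months to arrive",
    "fast shipping and works great",
    "excellent build quality, very happy",
]
human_labels = [0, 0, 1, 1]

# 2. Algorithm development and model training: engineers choose a model
#    and fit it to the human-labeled data.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(collected_texts, human_labels)

# 3. Implementation and deployment: technicians expose the model to users.
def predict_sentiment(text: str) -> int:
    return int(model.predict([text])[0])

# 4. Monitoring and maintenance: humans review live predictions, collect
#    the mistakes, relabel them, and periodically retrain.
print(predict_sentiment("arrived quickly and works great"))
```

At every numbered step, a person's judgment is baked into the system; my project asks who those people are and what conditions they work under.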

I would like to discuss this in essay format and, in addition, present the demographics in a Tableau presentation.

Precarity and GPT – Nelson

Above is a recent update on GPT from its creator, OpenAI, on its progress toward becoming a more competent AI for the masses. In this short ad, we see how ChatGPT has been remodeled to be "Safer and more aligned," as we have recently seen outbursts over how nefarious AI/ChatGPT can become.

We see news articles proclaiming that AI will replace and destroy human art and ingenuity. In pieces such as The Atlantic's article on the coming "textpocalypse," first-world problems and fear-mongering are abundant, as many writers fear that AI-generated text will rapidly crowd out human writing.

“Whether or not a fully automated textpocalypse comes to pass, the trends are only accelerating. From a piece of genre fiction to your doctor’s report, you may not always be able to presume human authorship behind whatever it is you are reading. Writing, but more specifically digital text—as a category of human expression—will become estranged from us.”

https://www.theatlantic.com/technology/archive/2023/03/ai-chatgpt-writing-language-models/673318/

This fear of AI replacing low-effort posts is truly a first-world problem. There is so much precarity over which mediums and jobs will be lost to ChatGPT/AI.

I had a wonderful chat about ChatGPT with my classmate Tuka; we spoke about how people are potentially focusing on the wrong thing at the moment, and how not many people know how these AI models are made.

Seeing the recent ad made me think of a question:

How has OpenAI made GPT "safer and more aligned"?

https://time.com/6247678/openai-chatgpt-kenya-workers/

In this TIME article, we can uncover how the sausage is made: OpenAI outsourced much of the filtering to humans.

“To build that safety system, OpenAI took a leaf out of the playbook of social media companies like Facebook, who had already shown it was possible to build AIs that could detect toxic language like hate speech to help remove it from their platforms. The premise was simple: feed an AI with labeled examples of violence, hate speech, and sexual abuse, and that tool could learn to detect those forms of toxicity in the wild. That detector would be built into ChatGPT to check whether it was echoing the toxicity of its training data, and filter it out before it ever reached the user. It could also help scrub toxic text from the training datasets of future AI models.

To get those labels, OpenAI sent tens of thousands of snippets of text to an outsourcing firm in Kenya, beginning in November 2021. Much of that text appeared to have been pulled from the darkest recesses of the internet. Some of it described situations in graphic detail like child sexual abuse, bestiality, murder, suicide, torture, self harm, and incest.”

https://time.com/6247678/openai-chatgpt-kenya-workers/
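To picture the technique the article describes, here is a minimal sketch of a labeled-examples toxicity filter, assuming a simple scikit-learn classifier. The training snippets below are mild, hypothetical stand-ins; OpenAI's actual classifier, dataset, and architecture are not public:

```python
# A minimal sketch of the "feed an AI labeled examples of toxicity, then
# filter model output" approach described above. Toy data, hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Step 1: human annotators read raw text and label it (1 = toxic, 0 = not).
texts = [
    "I hope something terrible happens to you",
    "you are worthless and everyone hates you",
    "thanks for the recipe, it turned out great",
    "what time does the library open tomorrow",
]
labels = [1, 1, 0, 0]

# Step 2: train a detector on the human-labeled examples.
detector = make_pipeline(TfidfVectorizer(), LogisticRegression())
detector.fit(texts, labels)

# Step 3: before a model response reaches the user, check it and filter.
def filter_output(response: str) -> str:
    if detector.predict([response])[0] == 1:
        return "[response withheld by safety filter]"
    return response

print(filter_output("you are worthless"))  # likely withheld
```

The machine does the filtering at serving time, but every label the detector learned from came from a human who had to read the original text.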

The outcry online is misplaced, in my opinion. I can't imagine the trauma many of these individuals suffer in order to filter out the worst of humanity.

Week 4 –

by: Nelson

How are power structures a part of our institutions and our technology?

There are many types of power structures in our society, like banks, governments, and corporations. They play a crucial role in showcasing power via legal routes, financial prowess, or corporate domination. Many of our day-to-day actions are ruled by technology and the rules behind it, whether we order pizza from a store or take out a loan for school. There are hundreds of barriers in place, some for efficiency and some for bureaucratic reasons.

How are power structures within our institutions connected to our technology?

Technology has replaced our old dogmatic ways of recordkeeping. Twitter/Facebook is a more accessible global "town hall," where powerful institutions like mega-corporations use these mediums as they did before: influencing local norms, silencing contrarian arguments, and aiding in witch hunts for political influence.

What are the ways we can take back power, share power, and build power together? 

There should be a united conscience of people to moderate mega-entities. There should be more consumer protection, workplace unions, and understanding of the grievances of all parties. While freedom of speech should be supported, not all speech should be celebrated equally.

Thoughts on readings

I enjoyed the selection that was provided. Bina48 is a glimpse of the different AIs that will appear in the future. Future AI will be shaped by its initial data and its biases. We can see the difference in responses between Bina48, ChatGPT, and Sydney (Microsoft's AI).

Bina48 comes from a more marginalized background. ChatGPT takes a more Westernized approach to data collection. Sydney, meanwhile, was unfiltered, gathering data from unmoderated cesspools around the internet.

As time goes on and AI becomes more and more of a norm, we will see more manifestos declaring autonomy and liberation. At the end of the day, is this just a human emotion that has been "digitized"?

Week 3

SeeThroughNY:

My initial thought is that this is a wonderful way to show some transparency around salaries. However, I feel that the main audience will be fellow employees. It would be beneficial if there were a download/export function so the data could be seen and interpreted in aggregate rather than individually. This can also be seen in the FAQ, where many of the questions relate to individuals rather than groups, citing FOIL as the reason this is a public record.

FY21/23 CUNY Budget:

This is a standard financial statement for a large organization. Growing from $5.2B to $5.6B in revenue is good, even after the pandemic. There is also a drop in operating costs, from $5.4B to $5.1B. Is this a sign that leadership is okay with downsizing staff? There is roughly a 200K drop tied to full-time instructors, or perhaps a reduction of part-time instructors.
These types of documents are meant to be compared against prior financial statements over a five-year period.

http://www.cuny.edu/wp-content/uploads/sites/4/page-assets/about/administration/offices/budget-and-finance/The-City-University-of-New-York-Fiscal-Year-2018-Audited-Financial-Statements-Notes-and-MDA.pdf
For 2018, we see $4.9B in revenue and $5.0B in expenses, so we can see that they have changed how they spend and receive.
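As a quick sanity check, a few lines of arithmetic on the figures quoted above (in billions, assuming I am reading the statements' thousands-denominated tables correctly) show the shift from deficit to surplus:

```python
# Net position per fiscal year, using the approximate figures quoted
# above (billions of dollars, as read from the statements).
figures = {
    "FY18": {"revenue": 4.9, "expenses": 5.0},
    "FY21": {"revenue": 5.2, "expenses": 5.4},
    "FY23": {"revenue": 5.6, "expenses": 5.1},
}

for year, f in figures.items():
    net = f["revenue"] - f["expenses"]
    label = "surplus" if net >= 0 else "deficit"
    print(f"{year}: {label} of ${abs(net):.1f}B")
# FY18 and FY21 show small deficits; FY23 flips to a surplus, which is
# the change in how they spend/receive noted above.
```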

My main takeaway is that these organizations are similar to banks in the "too big to fail" sense and will always have a backup somewhere.