Precarity and ChatGPT
While AI algorithms can automate many tasks, people are still necessary to oversee and improve these systems.
I would like to examine how much human labor is involved in the creation of AI/ML (machine learning) systems. The focus will be on precarity within the structures surrounding their creation, with subtopics on the power structures (e.g., FAANG companies) they can produce, and on whether care (ethics in AI) is being created or implemented.
I understand precarity as a state of being insecure, uncertain, or at risk of harm or disadvantage.
I currently believe that AI/ML is at an infant stage of its development, extremely insecure, and at risk of further disrupting and pushing humanity/society into unknown territory because of our capitalistic nature.
Examples of human labor in AI/ML are:
Data Collection and Labeling: AI systems rely on large amounts of data to learn and make predictions, much of it collected, cleaned, and labeled by human workers.
Who are the people who review this data? What are their roles? Do they have proper protections while working?
Since this is the initial stage of the process, I believe it is important to visualize.
Algorithm Development, Model Training, and Validation: AI algorithms are typically created by human developers who write the code, train the models, and validate the results.
System Implementation and Deployment: Human engineers and technicians are responsible for implementing and deploying AI systems in real-world settings.
Monitoring and Maintenance: After the AI system is deployed, people are responsible for monitoring its performance and making adjustments.
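The four stages above can be sketched in code. This is a minimal, purely illustrative sketch of my own (the "model" is a trivial threshold classifier, and all function names here are hypothetical, not a real library's API); its point is only to mark where human labor enters each stage:

```python
def collect_and_label(raw):
    # Stage 1: data collection and labeling -- in practice performed by
    # human annotators, often low-paid contract workers.
    return [(x, 1 if x >= 0.5 else 0) for x in raw]

def train(data):
    # Stage 2: algorithm development and training -- human developers
    # choose the method; here, "learning" is just taking the mean input
    # as a decision threshold.
    xs = [x for x, _ in data]
    return sum(xs) / len(xs)

def deploy(threshold):
    # Stage 3: implementation and deployment -- human engineers wrap
    # the trained model for real-world use.
    return lambda x: 1 if x >= threshold else 0

def monitor(model, labeled):
    # Stage 4: monitoring and maintenance -- humans track accuracy and
    # decide when to adjust or retrain.
    correct = sum(1 for x, y in labeled if model(x) == y)
    return correct / len(labeled)

data = collect_and_label([0.1, 0.4, 0.6, 0.9])
model = deploy(train(data))
accuracy = monitor(model, data)
```

Even in this toy version, every stage is a point where a person made a decision, which is exactly where questions of precarity, bias, and care arise.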
Who are these developers? How much bias do they introduce? Do we have any demographic data on their backgrounds? Once an AI system is in place, are there dedicated ethics personnel?
These are the questions I will start with.
I would like to discuss this in essay format and, in addition, present the demographics in a Tableau presentation.