Wednesday, Aug 7: 10:30 AM - 12:20 PM
6077
Contributed Posters
Oregon Convention Center
Room: CC-Hall CD
Main Sponsor
Section on Text Analysis
Presentations
Due to recent advancements in generative AI, there is apprehension in the entertainment industry about the use of AI in place of human writers. Despite AI's potential to aid writers by making their work more efficient and fostering new ideas, many fear it as a threat to their livelihoods. The lack of clarity about how AI is used in creative processes harms writers whose careers are threatened, and a large cause of this issue is the lack of awareness of the extent to which AI can, or should, be used in creative work. If AI is used to aid, rather than replace, writers, it could easily be a boon rather than a threat. A study that evaluates the capabilities of currently available AI writing assistants would aid in understanding AI's present capacity for use in creative spaces. This study compared multiple currently popular AI writing assistants: a set of prompts was devised and given to each assistant, and the responses were then evaluated via a rubric. This research gauges where the strengths and weaknesses of AI writing assistants currently lie, to gain a better understanding of their practical use cases.
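The prompt-and-rubric comparison described above can be sketched as follows. The criteria, weights, assistant names, and scores here are all illustrative placeholders, not the study's actual rubric: each assistant's response to a prompt is scored per criterion, and a weighted total summarizes its performance.

```python
# Hypothetical rubric: criterion -> weight (weights sum to 1).
# These criteria and weights are assumptions for illustration only.
RUBRIC_WEIGHTS = {
    "coherence": 0.3,
    "creativity": 0.3,
    "prompt_adherence": 0.2,
    "style": 0.2,
}

def rubric_score(scores: dict) -> float:
    """Weighted total of per-criterion scores (each on a 0-5 scale)."""
    return sum(RUBRIC_WEIGHTS[c] * s for c, s in scores.items())

# Two hypothetical assistants scored on one prompt's response.
assistant_scores = {
    "assistant_a": {"coherence": 4, "creativity": 3, "prompt_adherence": 5, "style": 4},
    "assistant_b": {"coherence": 5, "creativity": 2, "prompt_adherence": 4, "style": 3},
}

totals = {name: rubric_score(s) for name, s in assistant_scores.items()}
# Higher weighted totals indicate stronger overall performance on that prompt.
```

Averaging such totals across a full prompt set would then expose where each assistant's strengths and weaknesses lie per criterion.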
Keywords
Artificial Intelligence
Generative AI
Creative Writing Assistants
Performance Criteria
Semantic Analysis
AI Content Generation
Mental health challenges, including depression, are closely linked with the potential for developing suicidal ideation. Detecting these ideations early is crucial for effective treatment. With the use of artificial intelligence (AI), we can contribute to the early detection of suicidal ideation and improve personalized mental health care. We explore the use of annotated mental health discussions from Reddit to develop a tailored model, called PsychBert, for identifying mental health disorders. The model's efficacy was evaluated against OpenAI's GPT-3.5 using zero-shot classification, showing superior performance in identifying different mental disorders. The study integrated retrieval-augmented generation (RAG) for enhanced diagnostic recommendations and utilized the Gemini-Pro model for customized diagnostic reports. The custom-developed PsychBert model outperformed OpenAI's GPT-3.5, achieving higher AUC scores. Deployed on the AWS platform, the approach provides a scalable foundation for enhancing mental health services. Future efforts will focus on incorporating Electronic Health Record (EHR) data to address health disparities and on exploring generative AI to transform mental health care.
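The AUC comparison at the heart of the evaluation above can be sketched with a small stand-alone function. The labels and model scores below are made-up placeholders; the abstract's actual models (PsychBert, GPT-3.5) are not called here — only the comparison mechanic is shown.

```python
def roc_auc(labels, scores):
    """ROC AUC via its rank interpretation: the probability that a
    randomly chosen positive example receives a higher score than a
    randomly chosen negative one (ties count as half a win)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical held-out posts: 1 = ideation present, 0 = absent.
labels = [1, 0, 1, 1, 0, 0, 1, 0]
fine_tuned_scores = [0.9, 0.2, 0.8, 0.7, 0.4, 0.1, 0.6, 0.3]  # placeholder
zero_shot_scores  = [0.7, 0.4, 0.6, 0.5, 0.6, 0.2, 0.4, 0.3]  # placeholder

auc_ft = roc_auc(labels, fine_tuned_scores)
auc_zs = roc_auc(labels, zero_shot_scores)
# A higher AUC for the fine-tuned model mirrors the abstract's finding.
```

In practice, one would obtain the zero-shot scores from the LLM's class probabilities and the fine-tuned scores from the classifier head, then compute AUC per disorder label.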
Keywords
Mental Health
Generative AI
Large Language Models (LLMs)
RAG
We consider the problem setting in which we have two sets of texts in digital form and would like to quantify our belief that the two sets of texts were written by the same author versus by two different authors. Motivated by problems in digital forensics, the sets of texts could be composed primarily of short-form messages, and texts by the same author may be about vastly different topics. To this end, we focus on user-specific stylometric aspects of the texts that are consistent across an author's writings and are invariant to topic. Recent work in machine learning has sought to learn a mapping from input texts to a vector representation intended to capture such stylometric features. In this work, we investigate the use of such stylometric text embeddings to construct a score-based likelihood ratio (SLR), an increasingly popular way of quantifying evidence in forensics. We present the results of SLR experiments using recently proposed stylometric embeddings from machine learning applied to real-world datasets relevant to digital forensics.
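The SLR construction above can be sketched in miniature: a similarity score between two documents' stylometric embeddings is evaluated against score distributions estimated from known same-author and different-author pairs. The Gaussian density fits, the calibration scores, and the placeholder embeddings below are all simplifying assumptions for illustration, not the paper's method.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

def gaussian_pdf(x, mean, sd):
    z = (x - mean) / sd
    return math.exp(-0.5 * z * z) / (sd * math.sqrt(2 * math.pi))

def slr(score, same_scores, diff_scores):
    """Score-based likelihood ratio: density of the observed score under
    the same-author score model divided by its density under the
    different-author score model (Gaussian fits as an assumption)."""
    def fit(xs):
        m = sum(xs) / len(xs)
        sd = (sum((x - m) ** 2 for x in xs) / (len(xs) - 1)) ** 0.5
        return m, sd
    return gaussian_pdf(score, *fit(same_scores)) / gaussian_pdf(score, *fit(diff_scores))

# Calibration scores from labeled pairs (made up for illustration).
same_author_scores = [0.82, 0.74, 0.78, 0.85, 0.70]
diff_author_scores = [0.35, 0.42, 0.28, 0.50, 0.38]

# Placeholder stylometric embeddings for the two questioned text sets.
e1, e2 = [0.3, 0.8, 0.5], [0.25, 0.75, 0.55]
ratio = slr(cosine(e1, e2), same_author_scores, diff_author_scores)
# ratio > 1 supports the same-author proposition; ratio < 1 supports
# the different-author proposition.
```

Real systems would replace the Gaussian fits with calibrated density estimates and the placeholder vectors with embeddings from a trained stylometric model.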
Keywords
digital forensics
authorship analysis
large language models
machine learning
idiolect
text data