
Utility & AI & Grant Writing

    I will be applying utilitarianism to AI as it is used specifically for grant writing.
    First, I will explain what AI does and its ethical complications. Next, I will explain what grant writing is. Then, I will dive into utility and the hedonistic calculus, looking specifically at fecundity, extent, and intensity. I will briefly share an example of an AI-generated response to a grant prompt similar to ones I have been working on at my job. Finally, I will offer some concluding thoughts on how to engage ethically with AI when writing grants.
    I write grants at my job, and I have become increasingly interested in AI. Since AI is now used for almost anything writing-related, applying it to grant writing seemed obvious. I would love to use AI to save time and write successful grants.
    Artificial intelligence develops through machine learning: it improves as it receives more input and identifies more patterns. AI can do things we can't, like quickly identify a problem in an obscure equation (Hillen, 2024). It generates original content, which sets it apart from an ordinary online search. AI is great for brainstorming ideas and finding fast answers, and it is a huge time saver for white-collar jobs.
    The presupposed ethical values of AI are an emphasis on speed and convenience, access to a lot of information quickly, and learning by experience. I find conformity interesting to consider: while AI appears to generate its own content, I question whether it can ever create something truly innovative, since it can only extend patterns it has already seen. Also, many people assume AI cannot learn and improve, yet it can, and I wonder how that capacity connects to its ability to generate seemingly original material.
    Bankins and Formosa (2021) write, “AI’s impact on individuals’ autonomy is a key issue for the ethical AI literature.” If AI takes over tasks that humans want to do, then we lose autonomy. This may be key to why some have concerns about AI being used for grant writing, which is typically persuasive, heartfelt, and passionate writing.
    AI cannot estimate the uncertainty or truth of what it generates. The risk of unemployment is especially concerning to those whose jobs AI will soon be able to do alone, without much supervision. AI also takes over certain human skills, and this changes the value others place on those skills. For example, what happens if AI can proofread a document on its own, making human review unnecessary? Proofreaders and their skills will become less valuable in society, and consider the implications when this is true for many jobs. AI use can foster a negative self-image for those working in those “less valued” jobs. AI also tracks what we do, always watching and learning from our continued input, which is unsettling for obvious reasons. It will affect the workforce disproportionately, with blue-collar workers receiving much less benefit than white-collar workers. Of course, AI presents issues related to plagiarism and misinformation as well.
    Those who use AI need to learn the most appropriate prompts, just as anyone using Google does. In my experience, however, AI does better with full sentences than with bare search terms. Users also need to know that AI appears intelligent and accurate, but it may be biased or simply incorrect. Additionally, users should be aware that the content they put into AI is monitored and tracked, like any other digital footprint. It is not yet known to what extent our personal content could be reused by AI elsewhere in generating responses. Could grant content we write and input into AI later surface in its responses to other grant writers’ inquiries? Could our competitors gain access to our ideas?
    To my knowledge, grants are often written by non-profit organizations, humanitarian organizations, and scientists, though businesses and individuals can write grants as well. Essentially, grants are written by people working to make the world better by improving quality of life, saving lives, or prolonging life.
    Utilitarianism is about finding what offers the greatest good for all involved. By analyzing three elements of the hedonistic calculus (extent, fecundity, and intensity), I will highlight some of the ethical complications of AI’s continued use in writing grants. In my experience, most people are hesitant about the negative possibilities AI brings, but I find that with more knowledge, I am more inclined to see the benefits. Using utility to analyze AI for a specific purpose (grant writing) sets aside the feeling behind the ethical choice and focuses strictly on its logic. Because so many people hold strong opinions about AI, I think a utilitarian analysis will shed thought-provoking light on the subject. Utility is better suited to AI than other theories, which lend themselves more to following emotional responses.
    Krening (2023) writes, “...machine learning agents are really good at performing complex calculations quickly; however, they are not yet proficient at assigning moral value to possible consequences.” AI has no moral conscience, and I think that is why so many are resistant to trusting its evaluations (in medical contexts, for example). What does this mean for grant writing, a practice that relies on the connection made between the project in the grant and those reading it? It means there are likely people who do not trust AI to write compelling content, in ethical ways, that ultimately brings in funding. A truly utilitarian perspective would disregard those feelings and seek to know simply whether AI can bring in additional funding or not.
    Grants offer support in ways that cannot be measured conclusively, because one child helped sets off a chain reaction of possibilities. Taking it at face value, though: if AI helps grant writers secure more funding faster, it can expedite the speed at which needs are met. Scientists will be able to spend more time researching solutions to diseases, humanitarian organizations will be able to bring knowledge and nourishment faster to save lives, and non-profits will be able to help their constituents sooner. Because we can help people sooner, we can help more people in a shorter amount of time. AI has the potential to help a nearly unlimited number of people because of this chain reaction of goodness. On the flip side, if AI proves unhelpful or ineffective in grant writing, it will waste the grant writer’s valuable time and may cost the organization funding, not to mention the money spent paying a grant writer who is wasting time on the clock. However, AI is predicted to be more beneficial than not, even if used only as a brainstorming or proofreading tool.
    AI is not going away. People are already using AI to help write grants (Seckel, 2024), and it is certain that AI will continue to be used in the grant writing process. What is less certain is whether it will be used by grant reviewers. How AI is used in the grant process will change over time as AI improves and as users learn to navigate it better. That learning and adapting can only be positive, but along the way we could leave a trail of pain instead of pleasure; all learning processes do this to some extent. AI will likely improve exponentially, because the more we use it, the stronger, better, and faster it becomes. It has the potential to track which grants are awarded and which are not, and it could offer users better advice once it begins learning those patterns. It is likely that AI will be able to track which kinds of grants get awarded at specific organizations, which means applications can become even more tailored. For example, certain words and phrases might help an application stand out to one grant reviewer, while different words and phrases make another application stand out elsewhere. On that note, we should consider the possibility that grant writing may become more automated and yet more personal, a strange paradox. Will the entire grant writing and awarding process become more transactional than relational? Or, on the other hand, will it become more important to have built a human connection with the individuals at the organization offering the grant? Some of my past research indicates that personal relationships cannot be overstressed in marketing and grant writing. People like to give money to those they trust.
    Many organizations rely on funding to continue their mission, and that reliance extends to those who depend on the organizations for sometimes life-saving help. Funds from grants typically impact constituents directly in extremely meaningful ways. More funding means more pleasure: more lives saved, educated, or supported. Less funding means greater suffering: death, lack of education, and lack of support. The gap AI might create between those writing grants with AI and those reviewing grants without AI assistance could complicate the process and limit which grants are awarded. Likewise, there could be a disconnect between similar organizations vying for the same grant. For example, when Organization 1 understands and uses AI successfully while Organization 2 has limited resources and untrained or resistant employees, the gap widens. That divide could unbalance where funding goes in society. What could that mean for the target populations these organizations serve?
    Here is an example of a grant question and AI’s response (you are welcome to pause the video to read it over). You can see that the response is not as specific as what I would have written myself, knowing my organization and the purposes of our Community Outreach Program; I have a more holistic view of what the response needs. This prompt is directly related to a grant I have been writing for my organization. I find AI’s response persuasive, but it differs from what I chose to focus on in my own responses, because I can connect other elements of the grant together and discuss our other long-term goals for the program. I truly believe that if I spent more time asking AI more specific questions and using it to brainstorm ideas, it would help me craft a truly impactful response to grant questions like this one.
    From a utilitarian perspective, grant writing with AI is a complex issue. The ultimate consideration is that the growing divide, between those using AI effectively and those not using it at all or using it ineffectively, affects numerous people and places limits on their access to help and support. Those who understand and practice using AI will master their craft faster, their organizations will bring greater relief and pleasure to their constituents, and that impact will be far-reaching, even into generations to come. Those who fall behind, misusing or misunderstanding how to utilize AI effectively in grant writing, will struggle to carry out their mission and help their constituents, and the resulting suffering could likewise affect countless people. Grant writers should seek to understand AI as, for now, a brainstorming tool rather than a “does it all” tool, and should keep focusing on building connections with the companies and organizations that offer the grants.
Video Language:
English
Duration:
14:01
