The Wall Street Times

Ethical AI in Practice for Database Administrators and Technology Professionals

By: Evans Ighodalo, Senior Database Administrator

Why Transparency in AI Use Is an Ethical Imperative

Artificial Intelligence is reshaping how technology professionals work, yet many database administrators (DBAs) and tech workers quietly use AI tools while downplaying their involvement, fearing that disclosure will make them appear less skilled. This “AI shame” is counterproductive and ethically problematic. According to McKinsey & Company, more than half of all organizations had adopted at least one AI function by 2023, with adoption continuing to accelerate across every major sector (McKinsey & Company, 2023). Despite this, surveys show that many workers feel uncomfortable disclosing AI assistance, particularly when they fear it may diminish the perceived value of their expertise (Accenture, 2023).

This article argues that ethical AI use is defined by transparency, intellectual ownership, and human judgment, not concealment. It also shows that the most powerful AI outputs belong to professionals who bring domain expertise, creative prompting, and critical thinking to every interaction, and it traces those principles into the healthcare and pharmaceutical sectors where the stakes are highest.

Why Transparency Is a Professional Obligation

When a DBA uses AI to generate a complex SQL query or diagnose a performance bottleneck, their intellectual contribution does not disappear. It transforms. The DBA’s role shifts to expert direction, evaluation, and verification: higher-order cognitive tasks that require deep domain knowledge. The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems states that professionals using AI have an obligation to maintain transparency about its role in their work, particularly when outputs affect other people or organizational systems (IEEE, 2019). The World Economic Forum’s AI Governance Alliance similarly identifies transparency and meaningful human oversight as foundational principles for responsible AI deployment (World Economic Forum, 2023).

Concealing AI use, when it materially influences work product, can erode trust and misrepresent competencies. A landmark MIT study found that ChatGPT substantially raised average productivity in professional writing tasks, with completion times decreasing by 40% and output quality rising by 18%, while also compressing the gap between higher- and lower-performing workers (Noy & Zhang, 2023). Transparency about this collaborative process is not an admission of inadequacy; it is a demonstration of sophisticated, modern professional practice.

Critical Thinking, Prompting, and Creativity

Prompt engineering, the practice of designing precise, context-rich inputs to elicit optimal AI outputs, is one of the most valuable new skills in the modern technology workforce. A vague prompt produces a vague result. A prompt crafted by a skilled DBA that specifies the database engine, includes schema context, defines constraints, and anticipates edge cases produces a result of far greater utility. Research published by Harvard Business Review Press found that professionals who excelled at working with AI treated it as a collaborative thinking process rather than a command-and-retrieve tool, refining and iterating through active intellectual engagement (Daugherty & Wilson, 2018).
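To make the contrast concrete, the following sketch assembles a context-rich prompt of the kind described above. All names here (the table, columns, engine version, and the `build_prompt` helper) are hypothetical illustrations, not a prescribed format.

```python
# Sketch: assembling a context-rich prompt for an AI SQL assistant.
# Every identifier below is a hypothetical example.

def build_prompt(engine: str, schema: str, task: str, constraints: list[str]) -> str:
    """Combine engine, schema context, and explicit constraints into one prompt."""
    lines = [
        f"Database engine: {engine}",
        "Schema:",
        schema,
        f"Task: {task}",
        "Constraints:",
    ]
    lines += [f"- {c}" for c in constraints]
    return "\n".join(lines)

# A vague prompt leaves the model guessing about dialect, schema, and edge cases:
vague = "Write a query to find slow orders."

# A precise prompt encodes the DBA's domain knowledge up front:
precise = build_prompt(
    engine="PostgreSQL 16",
    schema="orders(id, customer_id, created_at, shipped_at)",
    task="List orders where shipped_at minus created_at exceeds 7 days.",
    constraints=[
        "Use ANSI SQL where possible.",
        "Handle NULL shipped_at (unshipped orders) explicitly.",
        "Return at most 100 rows, newest first.",
    ],
)

print(precise)
```

The point is not the helper function but the discipline it encodes: engine, schema, task, and constraints stated explicitly, so the model's output can be judged against requirements the professional chose in advance.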

Two professionals with identical AI tools will produce vastly different outcomes. The differentiator is human creativity: framing problems insightfully, synthesizing information across domains, and imagining possibilities that no pattern-matching system can generate alone. As Brynjolfsson and McAfee observe, the highest-value human contributions increasingly involve “creative recombination,” connecting ideas across domains in novel ways (Brynjolfsson & McAfee, 2014). AI amplifies creative potential; it does not replace it.

Critical thinking is equally indispensable as a quality filter. AI systems can produce outputs that are confident, fluent, and wrong. Research on human-algorithm interaction has shown that people tend to over-rely on algorithmic recommendations, accepting outputs without sufficient scrutiny, particularly when they lack domain expertise (Logg, Minson & Moore, 2019). The professional who owns the review process owns the outcome.

AI in Healthcare and Pharmaceuticals

Clinical Decision Support

A landmark study in Nature demonstrated that an AI system analyzing mammography data detected breast cancer at an accuracy that surpassed human expert readers while simultaneously reducing false positive and false negative rates (McKinney et al., 2020). AI-based physiological monitoring tools, such as scoring systems that integrate vital sign data from preterm infants in neonatal intensive care, have demonstrated the ability to predict severe illness earlier and more accurately than traditional clinical assessment methods (Saria et al., 2010). These outcomes are only possible when clinical teams engage critically with AI alerts, neither dismissing them reflexively nor acting without judgment.

Drug Discovery

The average cost of bringing a new drug to market exceeds $2.6 billion, with timelines stretching 10 to 15 years (DiMasi, Grabowski & Hansen, 2016). AI is disrupting this calculus. DeepMind’s AlphaFold2 predicts protein structures with remarkable accuracy, enabling researchers to model molecular interactions that previously required years of laboratory work, and its database now covers over 200 million proteins (Jumper et al., 2021). Insilico Medicine used an AI-driven platform to identify a novel drug candidate for idiopathic pulmonary fibrosis in just 18 months and advanced it to Phase 2 clinical trials (Ren et al., 2024). Neither achievement was the product of autonomous AI; both required expert human teams to frame problems, evaluate candidates, and steer the process at every stage.

Healthcare DBAs and Regulatory Transparency

DBAs in healthcare organizations manage large, complex data environments while carrying the regulatory obligations of protected health information (PHI). Data from the HHS Office for Civil Rights indicates that tens of millions of patient records have been exposed through breaches in recent years (HHS Office for Civil Rights, 2021), underscoring the urgency of AI-powered anomaly detection for identifying unauthorized access. The FDA’s framework for AI/ML-based software as a medical device explicitly requires documentation of AI’s role in clinical decision-making, audit trails, and ongoing monitoring of AI performance (FDA, 2021), formalizing at the institutional level the same transparency principles that individual practitioners should embrace.
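As a simplified illustration of the anomaly-detection idea, the sketch below flags users whose daily record-access counts deviate sharply from the norm, using a median-absolute-deviation test. The user IDs, counts, and threshold are invented for illustration; production systems use far richer behavioral features than raw volume.

```python
# Sketch: flagging anomalous PHI access volumes with a robust outlier test.
# Data and threshold are illustrative only; real monitoring uses richer features.
from statistics import median

def flag_anomalies(daily_counts: dict[str, int], threshold: float = 3.5) -> list[str]:
    """Return user IDs whose access count is an extreme high outlier,
    using the modified z-score (median absolute deviation)."""
    values = list(daily_counts.values())
    med = median(values)
    mad = median(abs(v - med) for v in values)
    if mad == 0:  # no spread at all: nothing can be called anomalous
        return []
    # 0.6745 scales MAD to be comparable with a standard deviation
    return [u for u, c in daily_counts.items()
            if 0.6745 * (c - med) / mad > threshold]

# One user pulling ~20x the typical number of records stands out immediately:
access = {"u01": 42, "u02": 38, "u03": 45, "u04": 40, "u05": 910}
print(flag_anomalies(access))  # → ['u05']
```

A median-based test is used here rather than a simple mean/standard-deviation cutoff because a single extreme outlier inflates the standard deviation enough to hide itself; the MAD statistic is robust to exactly that failure mode.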

A Practical Framework for Ethical AI Use

  • Disclose, Don’t Conceal: Proactively communicate when AI has contributed to your work. Frame it as modern professional practice, because it is.

  • Own the Output: Accept full responsibility for every AI-assisted deliverable. Ownership requires understanding, which requires critical evaluation.

  • Invest in Prompt Quality: The context, constraints, and domain expertise you bring to an AI interaction directly determine the value you receive.

  • Verify Before Deploying: Establish review protocols for AI outputs. Never deploy AI-generated code or queries to production without structured human validation.

  • Document the Human Contribution: Capture not only what was produced but how human judgment shaped and validated the AI’s role.

  • Stay Current on Ethical Guidance: Follow evolving standards from the IEEE, ACM, and sector-specific bodies such as the FDA.
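The “Verify Before Deploying” principle above can be partly mechanized. The sketch below is a deliberately crude pre-deployment gate for AI-generated SQL: a real pipeline would parse statements properly and exercise them against a staging database, and the keyword list is an assumption, not a standard.

```python
# Sketch: a crude review gate for AI-generated SQL before production.
# String matching is illustrative only; a real gate would use a SQL parser
# and staged execution. The keyword lists are illustrative assumptions.
import re

FORBIDDEN = ("DROP", "TRUNCATE", "ALTER")

def review_required(sql: str) -> list[str]:
    """Return the reasons a statement needs human review before deployment
    (an empty list means no automatic flags were raised)."""
    reasons = []
    upper = sql.upper()
    for kw in FORBIDDEN:
        if re.search(rf"\b{kw}\b", upper):
            reasons.append(f"contains {kw}")
    for kw in ("DELETE", "UPDATE"):
        if re.search(rf"\b{kw}\b", upper) and "WHERE" not in upper:
            reasons.append(f"{kw} without WHERE clause")
    return reasons

print(review_required("DELETE FROM patients"))     # → ['DELETE without WHERE clause']
print(review_required("SELECT * FROM audit_log"))  # → []
```

Automated flags like these complement, not replace, the structured human validation the framework calls for: the gate only decides which outputs demand closer scrutiny.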

The Professionals Who Will Thrive

The professionals who will thrive are not those who use AI most secretly or most passively, but those who engage with it most thoughtfully. They bring domain expertise to every prompt, critical judgment to every output, and professional honesty to every disclosure. In doing so, they demonstrate not that AI is doing their work, but that their work has evolved to encompass a powerful new class of tools.

In healthcare and pharmaceuticals, this distinction is life-saving. The clinician who critically evaluates an AI diagnostic alert, the pharmacologist who directs an AI drug discovery platform with domain expertise, and the healthcare DBA who validates AI-suggested optimizations before they touch clinical systems: these professionals are not diminished by AI. They are amplified by it. The same is true for every technology professional who chooses transparency over concealment, and engagement over passive acceptance.

References

Accenture. (2023). Technology Vision 2023: When atoms meet bits. https://www.accenture.com/us-en/insights/technology/technology-trends-2023

Brynjolfsson, E., & McAfee, A. (2014). The second machine age. W. W. Norton & Company.

Daugherty, P. R., & Wilson, H. J. (2018). Human + machine: Reimagining work in the age of AI. Harvard Business Review Press.

DiMasi, J. A., Grabowski, H. G., & Hansen, R. W. (2016). Innovation in the pharmaceutical industry. Journal of Health Economics, 47, 20–33. https://doi.org/10.1016/j.jhealeco.2016.01.012

Food and Drug Administration (FDA). (2021). AI/ML-based software as a medical device action plan. https://www.fda.gov/media/145022/download

IEEE. (2019). Ethically aligned design (1st ed.). IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. https://ethicsinaction.ieee.org

Jumper, J., et al. (2021). Highly accurate protein structure prediction with AlphaFold. Nature, 596(7873), 583–589. https://doi.org/10.1038/s41586-021-03819-2

Logg, J. M., Minson, J. A., & Moore, D. A. (2019). Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes, 151, 90–103. https://doi.org/10.1016/j.obhdp.2018.12.005

McKinney, S. M., et al. (2020). International evaluation of an AI system for breast cancer screening. Nature, 577(7788), 89–94. https://doi.org/10.1038/s41586-019-1799-6

McKinsey & Company. (2023). The state of AI in 2023. McKinsey Global Institute. https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai-in-2023-generative-ais-breakout-year

Noy, S., & Zhang, W. (2023). Experimental evidence on the productivity effects of generative AI. Science, 381(6654), 187–192. https://doi.org/10.1126/science.adh2586

Ren, F., et al. (2024). A small-molecule TNIK inhibitor targets fibrosis in preclinical and clinical models. Nature Biotechnology. https://doi.org/10.1038/s41587-024-02143-0

Saria, S., Rajani, A. K., Gould, J., Koller, D., & Penn, A. A. (2010). Integration of early physiological responses predicts later illness severity in preterm infants. Science Translational Medicine, 2(48), 48ra65. https://doi.org/10.1126/scitranslmed.3001304

U.S. Department of Health and Human Services, Office for Civil Rights. (2021). Annual report to Congress on HIPAA compliance.

World Economic Forum. (2023). AI governance alliance: Briefing paper series. https://www.weforum.org/publications/ai-governance-alliance-briefing-paper-series/

This article features branded content from a third party. Opinions in this article do not reflect the opinions and beliefs of The Wall Street Times.