Custom AI for Ukrainian–English Literary Translation

Project Overview

This project develops a custom-designed artificial intelligence model for Ukrainian–English literary translation, with a focus on preserving semantic nuance, stylistic features, and cultural specificity. The system has reached the proof-of-concept stage and has been applied in academic workshop settings to test methodological workflows, evaluation criteria, and human–AI collaboration models.

Project lead and system designer: Huseyin Oylupinar (Institute for Knowledge, Research, and Society)

Status: In development
Research Theme: AI, Knowledge Systems, and Research Methodology
Secondary Theme: Education, Knowledge Transfer, and Public Engagement
Focus Region: Ukraine / transnational literary exchange


Problem and Significance

Automated translation systems perform unevenly when applied to literary texts, particularly those involving metaphor, historical references, and culturally embedded language. For Ukrainian literature, this problem is amplified by limited high-quality parallel corpora and by the political and historical contexts shaping language use.

This project addresses the methodological gap between generic machine translation tools and the requirements of literary translation by developing a custom AI system explicitly designed for research, educational, and interpretive use rather than mass deployment.


Research Questions

  • How can custom-trained, suggestion-oriented AI systems support literary translation without flattening stylistic or cultural nuance?
  • What types of errors and distortions are most common in Ukrainian–English literary machine translation?
  • How can human–AI collaboration be structured to enhance, rather than replace, expert literary judgment?

Sources and Materials

  • Curated Ukrainian literary texts
  • Parallel Ukrainian–English translations where available
  • Expert-produced reference translations
  • Annotated examples highlighting metaphor, tone, and culturally specific language

All materials are selected and structured manually before computational processing.
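As one illustration of how such manually structured, annotated examples might be represented before computational processing, the sketch below defines a minimal record format. All class and field names are hypothetical, and the sample text is a well-known Ukrainian proverb used purely for demonstration; the project's actual data schema is not described in this document.

```python
from dataclasses import dataclass, field

# Hypothetical annotation categories, mirroring the features the
# project highlights: metaphor, tone, and culturally specific language.
CATEGORIES = {"metaphor", "tone", "cultural"}

@dataclass
class Annotation:
    span: tuple[int, int]   # character offsets in the source text
    category: str           # one of CATEGORIES
    note: str               # translator's commentary

    def __post_init__(self) -> None:
        if self.category not in CATEGORIES:
            raise ValueError(f"unknown category: {self.category}")

@dataclass
class ParallelExample:
    source_uk: str          # Ukrainian source sentence
    reference_en: str       # expert-produced reference translation
    annotations: list[Annotation] = field(default_factory=list)

# Example record built from a Ukrainian proverb (illustration only).
ex = ParallelExample(
    source_uk="Слово — не горобець.",
    reference_en="A word is not a sparrow (once out, you cannot catch it).",
)
ex.annotations.append(
    Annotation(span=(0, 20), category="cultural",
               note="Proverb; a literal rendering loses the idiomatic force.")
)
print(len(ex.annotations))
```

A flat record format like this keeps human judgments (the annotations) explicitly separate from the text itself, which matches the project's insistence that materials are selected and structured manually before any computational step.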


Methods and Approach

Human-led research
The project is grounded in literary analysis, close reading, and comparative translation studies. Human translators and researchers define evaluation criteria, identify problematic constructions, and assess outputs qualitatively.

Custom AI systems (methodological infrastructure)
Rather than deploying off-the-shelf translation models, the project develops custom AI workflows designed for task-specific refinement, controlled suggestion generation, and transparent evaluation against human benchmarks. AI outputs are treated as analytical aids and draft suggestions, not as authoritative translations.
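The "transparent evaluation against human benchmarks" step can be sketched as follows. This minimal character n-gram F-score, loosely in the style of the chrF metric, compares a model suggestion against an expert reference; it is a simplified stand-in for illustration, not the project's actual evaluation criteria, and the example sentences are invented.

```python
from collections import Counter

def char_ngrams(text: str, n: int) -> Counter:
    """Count character n-grams, ignoring spaces."""
    s = text.replace(" ", "")
    return Counter(s[i:i + n] for i in range(len(s) - n + 1))

def ngram_fscore(candidate: str, reference: str, n: int = 3) -> float:
    """Simplified chrF-style F1 over character n-grams of a single order."""
    cand, ref = char_ngrams(candidate, n), char_ngrams(reference, n)
    overlap = sum((cand & ref).values())  # clipped n-gram matches
    if not cand or not ref or overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

reference = "A word is not a sparrow."   # expert benchmark
suggestion = "A word is no sparrow."     # hypothetical AI suggestion
score = ngram_fscore(suggestion, reference)
print(round(score, 2))
```

Automatic scores of this kind can flag divergence from a human benchmark, but, as the project emphasizes, they cannot judge stylistic or cultural adequacy; they serve here only as one transparent input to qualitative human assessment.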

The project’s methodological framework has been tested through an academic workshop on AI-assisted translation, where participants evaluated AI-generated translations across literary, nonfiction, and scholarly texts. Workshop exercises focused on comparative tool analysis, structured post-editing, error identification, and ethical assessment. Insights from this setting inform ongoing refinement of model behavior, evaluation metrics, and human oversight protocols.
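As one illustration of how structured post-editing might be instrumented (a sketch under assumed inputs, not the workshop's actual tooling), Python's standard-library difflib can surface exactly which spans a human editor changed in an AI draft, which is useful raw material for error identification. Both sentences below are invented.

```python
import difflib

ai_draft = "The word flew out like a bird from the cage."     # hypothetical AI output
post_edit = "The word escaped like a sparrow from its cage."  # hypothetical human edit

a, b = ai_draft.split(), post_edit.split()
matcher = difflib.SequenceMatcher(None, a, b)

# Keep only the non-identical opcode spans: each records what the
# editor replaced, deleted, or inserted, word by word.
edits = [
    (op, " ".join(a[i1:i2]), " ".join(b[j1:j2]))
    for op, i1, i2, j1, j2 in matcher.get_opcodes()
    if op != "equal"
]
for op, before, after in edits:
    print(f"{op}: {before!r} -> {after!r}")
```

Logging edits in this structured form lets workshop participants aggregate recurring change patterns across texts and then classify them by hand, keeping the interpretive judgment about why an edit was needed entirely with the human editor.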


Ethics, Integrity, and Safeguards

  • No automation of final translation decisions
  • Clear separation between human authorship and AI-assisted suggestions
  • Explicit documentation of model limitations
  • Avoidance of proprietary or sensitive texts without consent
  • Ongoing bias and error monitoring

Outputs

Research outputs

  • Methodological working papers (forthcoming)
  • Comparative evaluation reports

Educational outputs

  • AI-assisted translation workshops and training modules
  • Structured exercises on post-editing, bias detection, and evaluation of AI translations
  • Demonstration cases based on Ukrainian–English translation scenarios

Public-facing outputs

  • Selected translated excerpts with methodological commentary

Partnerships

(in development)


Updates

  • 2025–2026 — Proof-of-concept model development and dataset curation
  • 2026 — Initial evaluation cycle and educational deployment in workshop contexts