Nuance’s on-prem and hosted speech stack is approaching end of life, with multiple partner notices confirming that hosted support will end in December 2025 and on-prem sustaining support around June 2026.
Industry coverage, including CX Today, highlights the broader shift away from Nuance’s on-premise contact-center solutions, with product-family end-of-support dates in 2027–2028.
If your IVR still depends on Recognizer, Vocalizer, or Dialog, you need a plan that protects customer experience while you modernize.
This guide looks at how to exit Nuance with minimal risk by stabilizing text-to-speech (TTS) first, then sequencing everything else on your terms.
Why a TTS-first approach wins
During migration, TTS is often the first step to stabilize prompts, menus, and transactional updates, because it can be replaced independently without disrupting call logic or customer flows.
ReadSpeaker delivers Neural TTS for Conversational AI in multiple footprints so you can fit architecture to constraints: MRCP v2 for standards-based IVR, on-premise deployments for data residency and predictable cost, embedded for deterministic latency at the edge, and cloud-based services for rapid iteration or burst traffic.
The result is a reliable foundation for conversational AI flows, voice AI experiences, and downstream analytics without forcing you into a single vendor’s roadmap.
We also recognize the broader ecosystem.
Some organizations will modernize within on-prem environments, keeping their TTS, ASR, and NLU infrastructure under direct control for compliance and predictability, rather than moving to the cloud.
Others may consolidate on Microsoft tooling or favor AWS components.
ReadSpeaker integrates cleanly across platforms—from on-prem conversational stacks to cloud-based AI frameworks—so you can sequence decisions instead of making them all on day one.
ReadSpeaker provides Neural TTS with deterministic voice engines designed for predictable, consistent output.
Unlike generative TTS, which can hallucinate, is resource-intensive in the cloud, and is less reliable for compliance, Neural TTS ensures safety, stability, and brand-aligned delivery across channels.
What’s changing
- December 2025 → Hosted support ends, per multiple partner notices
- June 2026 → On-prem sustaining support ends, the date most commonly cited
- 2027–2028 → End-of-support dates for remaining product families
Dates vary by product, route to market, and deployment, so the recommendation is to confirm your contractual terms in writing and plan as if the earliest date applies.
TTS architecture quick guide (constraint → choice)
- IVR requires MRCP v2 (Avaya, Cisco, Genesys, Mitel) → ReadSpeaker speechServer MRCP for a standards-based drop-in to existing on-premise IVR applications, with data residency & predictable cost.
- Offline or deterministic latency → ReadSpeaker speechEngine SDK for desktop and server-based applications, or ReadSpeaker speechEngine SDK Embedded for edge devices, kiosks, and automotive use cases.
- Bursty/global scale or fast iteration → ReadSpeaker speechCloud API, a cloud-based option.
- Consistent brand voice → Custom voice by ReadSpeaker VoiceLab.
This mix remains scalable, cost-effective, and compatible with your preferred Conversational AI platform choices.
The recommended 90-day plan
Weeks 1–2 — Audit & sizing
Create a single inventory and constraints brief:
- Components: Nuance versions, MRCP version, ports/channels, grammars, SSML, dictionaries/lexicons, audio formats, languages/voices, and licensing clocks.
- Constraints: on-premise vs. Virtual Private Cloud (VPC) residency, target first-audio latency per call type, availability goals, and budget model.
- Voice mapping: shortlist TTS voices and record “must-match” pronunciations.
- Integrations: list where speech recognition, transcription, chatbots, and analytics connect so cutover won’t break workflow reporting.
Output: inventory sheet, constraints summary, voice map, risk register for internal publication.
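As an illustration, the inventory sheet can be kept as structured data so CX, Operations, and IT all review the same artifact. This is a minimal sketch with hypothetical field names and values, not a ReadSpeaker or Nuance schema:

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class SpeechComponent:
    """One row of the migration inventory (illustrative fields only)."""
    name: str
    nuance_version: str
    mrcp_version: str              # e.g. "v1" or "v2"
    channels: int                  # licensed/observed concurrent channels
    languages: list = field(default_factory=list)
    notes: str = ""

# Example entries; real inventories come from your own audit.
inventory = [
    SpeechComponent("main-ivr-tts", "Vocalizer 7", "v2", 120, ["en-US", "es-US"]),
    SpeechComponent("outbound-notify", "Vocalizer 7", "v2", 40, ["en-US"]),
]

# Publish as JSON for the internal risk register and sizing discussions.
print(json.dumps([asdict(c) for c in inventory], indent=2))
```

Keeping the inventory machine-readable also makes it easy to total channel counts per site when sizing the replacement deployment.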
Weeks 3–6 — Deploy in parallel
Deploy your ReadSpeaker choice and run side-by-side with Nuance for a subset of calls:
- Regression: prompts, barge-in timing, SSML events, audio levels/codecs, MRCP event logging, real-time alerting, etc.
- Observability: dashboards for latency, error rates, and synthesis failures; tune alerts for the busy hour.
- Experience checks: multilingual menus and user experience reviews in noisy environments.
Output: test plan results, observability dashboards, rollback criteria. Share to build confidence across CX, Operations, and IT.
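One regression metric worth automating during the side-by-side phase is first-audio latency. The sketch below shows the measurement pattern only; the fake stream stands in for a real MRCP SPEAK or streaming-API response, which is not shown and would be specific to your deployment:

```python
import time
from typing import Iterable, Iterator

def first_audio_latency(chunks: Iterable[bytes]) -> float:
    """Seconds from request start until the first non-empty audio chunk.

    `chunks` can be any streaming audio source; in production you would
    wrap the synthesis response stream here.
    """
    start = time.monotonic()
    for chunk in chunks:
        if chunk:
            return time.monotonic() - start
    raise RuntimeError("stream ended with no audio")

def fake_stream(delay_s: float, n_chunks: int = 5) -> Iterator[bytes]:
    """Stand-in for a real synthesis stream: first chunk after `delay_s`."""
    time.sleep(delay_s)
    for _ in range(n_chunks):
        yield b"\x00" * 320  # ~20 ms of 8 kHz audio per chunk

latency = first_audio_latency(fake_stream(0.05))
print(f"first-audio latency: {latency * 1000:.1f} ms")
```

Running this per call type, against both the old and new engines, gives the dashboards a like-for-like latency series to alert on.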
Weeks 7–10 — Phased cutover & tuning
Increase traffic in phases, with rollback procedures tested in advance:
- Prosody: tune rate/pitch/volume and dictionary entries; verify voice bot clarity in noisy environments.
- Analytics: track latency, synthesis errors, and customer experience feedback. Fix fast and document residual risks.
- Channels: ensure a consistent brand voice across all omnichannel platforms—IVR, kiosk, and mobile.
Output: tuning log, KPI snapshots, mitigation list. Share findings across CX, IT, and Operations teams to support alignment and fast iteration.
Weeks 11–12 — Finalize & secure
- Remove Nuance dependencies. Finalize monitoring/alerts and backups.
- Update runbooks and incident playbooks. Validate success criteria and sign off.
- Capture lessons learned and share an update with internal stakeholders and partners to maintain alignment and reinforce customer trust.
Coordinate TTS, ASR, and Customer Experience
This guide is TTS-first, but you may also be exiting speech recognition infrastructure.
You can replace automatic speech recognition and TTS in parallel or sequentially.
Many teams stabilize TTS first to protect prompts and menus, then finalize their ASR deployment with their chosen vendor or CCaaS/IVR provider.
ReadSpeaker TTS integrates with third-party ASR/NLU solutions, AI platform components, and chatbots using MRCP v2, so you can keep your self-service flows running while you evaluate conversational AI vendors.
If you rely on transcription for QA or compliance, make sure its data paths remain intact during cutover, along with any related automation triggers.
Security, governance, and residency
Use migration to strengthen controls:
- Network & security: ensure MRCP connections are configured according to your organization’s security and governance policies.
- Logging & auditing: define retention and access policies that meet your regional compliance requirements.
- Residency: keep synthesis and logs within your chosen region through on-premise deployments.
- Future-proofing: choose deployment options that allow you to move between on-premise and cloud-based modes without re-authoring your IVR applications.
Business outcomes to measure
Define what success looks like:
- Latency: measure first-audio performance by call type — on-premise deployments are generally fastest, embedded engines provide deterministic timing, and cloud-based streaming can be validated against your network conditions.
- CX: track intelligibility in noise, barge-in responsiveness, brand tone, and customer satisfaction trends.
- Engagement: monitor menu completion rates and how customers interact with new self-service intents.
- Operations: track incident response and resolution times, confirm reliable failover, and aim for steady improvement in service stability.
Where AI fits—safely
Teams often ask how AI factors into a TTS-first migration.
The short answer: keep TTS as your governed, deterministic layer.
It provides the stability and consistency that conversational or generative AI tools can’t guarantee.
You can use AI to support design and testing, for example, to audit scripts or simulate user flows. But production speech should always rely on neural, non-generative TTS for predictable pronunciation, compliance, and brand tone.
ReadSpeaker TTS technology ensures your voice layer remains controlled and consistent, even as you experiment with new automation or conversational tools on separate tracks.
Functionalities that make your migration smoother
- MRCP v2 support for standards-based IVR integration across platforms like Avaya, Cisco, Genesys, Mitel, etc.
- SSML support—including say-as, breaks, emphasis, and prosody controls — for clear and natural-sounding prompts.
- Custom dictionaries and lexicons to handle medical, financial, and brand-specific terminology with precision.
- Multi-voice and multi-language options to scale consistently across omnichannel experiences.
- Flexible deployment modes—on-premise, cloud-based, hybrid, embedded—to adapt to changing customer and compliance needs without reauthoring.
- Straightforward interfaces for reporting and workflow integration so analytics and automation tools stay connected.
ReadSpeaker’s functionalities help you migrate faster, maintain voice quality, and avoid surprises during deployment.
They’re backed by a highly skilled and responsive ReadSpeaker support team, available throughout planning, deployment, and ongoing operations.
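To make the SSML support concrete, here is an illustrative prompt using standard SSML 1.0 elements (say-as, break, prosody), validated with Python's built-in XML parser. The prompt text is hypothetical, and interpret-as values can vary by engine, so treat this as a sketch rather than ReadSpeaker-specific markup:

```python
import xml.etree.ElementTree as ET

# An IVR balance prompt using common SSML 1.0 elements.
ssml = """<?xml version="1.0"?>
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
  Your balance is
  <say-as interpret-as="currency">$1,234.56</say-as>.
  <break time="300ms"/>
  <prosody rate="95%">To hear that again, press 1.</prosody>
</speak>"""

root = ET.fromstring(ssml)  # raises ParseError if the markup is malformed
ns = "{http://www.w3.org/2001/10/synthesis}"
print("root element:", root.tag)
print("say-as elements:", len(root.findall(ns + "say-as")))
```

Validating prompts like this in your build pipeline catches malformed markup before it ever reaches the synthesis engine.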
What to do now
Start with a migration blueprint: our team reviews your current stack, maps MRCP compatibility, and recommends an architecture that’s scalable, predictable in cost, and aligned with your security and data residency requirements.
Then work with our team to plan sizing, cutover, and deployment.
Talk to our team today to start your ReadSpeaker migration plan.