Feature · Hey Atlantic

Hands-free integration assistant. Voice-first.

"Hey Atlantic, brief me on the CFO before my 3 pm." Voice-readable briefings, conversational queries, and live project insight while driving, walking the walls, or between back-to-back meetings.

Voice on Independent Plus and above. iOS, Android, web. 15 languages.

Where voice actually wins

Three concrete moments where reading a screen is the wrong UI.

Driving between meetings

"Hey Atlantic, brief me on the CFO before my 3 pm." Voice-readable briefing in 90 seconds while you drive — eyes on the road, hands on the wheel.

Walk-the-walls during cutover

On the Day 1 cutover floor, in the server room, or on customer site walk-throughs: capture observations, log issues, and query the workplan without holding a laptop.

Between back-to-back meetings

"What did the CFO say about the synergy targets last week?" Recall prior commitments, open questions, and sentiment without scrolling through notes.

What voice can actually do

Wake-word activation. Conversational context. Multi-language. Privacy-respecting by design.

Wake-word activation

"Hey Atlantic" — on-device wake-word detection. No always-listening cloud upload; activation only after the wake word fires.

Voice-readable briefings

Stakeholder briefings, workstream status, RAID logs, synergy progress — generated as 60–120-second audio briefings designed for listening, not reading.

Conversational context

Multi-turn queries: "Brief me on the CFO" → "What about his concerns?" → "Who else feels the same?" The co-pilot keeps context across the conversation.

15 languages

Voice queries and briefings in 15 languages. Brief a German subsidiary stakeholder in German; brief the US sponsor in English; same source of truth.

Voice access by tier

Independent Plus and above. See full pricing.

Tier | Voice access
Independent | Text co-pilot only
Independent Plus | Full voice access
Pro | Full voice access for all 10 users
Pro Scaling | Full voice access for all 25 users
Enterprise | Full voice access + custom AI voice add-on available

Common questions

Does voice listen all the time?

No. On-device wake-word detection runs locally on your phone or laptop and activates only when "Hey Atlantic" is spoken. Audio is uploaded to the cloud only after activation, processed for the query, and discarded. We do not retain voice audio beyond the immediate query.

Does the voice feature use my speech data to train shared models?

No. Voice queries are processed via AWS Transcribe within your tenant boundary. Audio is not retained beyond the immediate query and not used for training shared models. Aggregated query patterns (anonymised, no audio) feed product improvement.

Which voice does it use? Can I customise it?

The default voice is a region-aware neutral synthetic voice via AWS Polly. Custom AI Voice (TTS) is an add-on ($165/mo or $1,980/yr): pick from a curated voice library or supply a brand voice for white-label deployments. Custom AI Assistant Naming pairs with this for full white-label.

Does it work without internet?

Wake-word detection works offline. Briefings and queries require connectivity to the AWS Bedrock backend. No offline-first mode currently — we recommend voice for hands-free use, not as a fallback for connectivity.

Is this just Siri / Alexa wrapped?

No. Voice is integrated with the stakeholder graph, project state, and Atlantic Intelligence — so a query like "brief me on the CFO" pulls live data from your tenant. Generic voice assistants don't have that context.

Eyes on the road. Hands on the wheel. Brief in 90 seconds.

14-day free trial. Voice on Independent Plus and above. SOC 2 Type II in progress · 75 patents filed.

Start free trial