AI Assistant — Internal Review

ADP+ AI Assistant
ADP+ Assistant is a personalized work-life companion that helps users understand their money, grow their careers, and improve their quality of life.

What should we call it?
Pick the name that best represents our AI assistant.
Noa
"Movement" in Hebrew
Feels human, warm, and approachable — like a friend who's good with money.
0 votes
Atlas
The titan who carried the world
Says "we'll carry the weight of your finances for you."
0 votes
Kai
"Prosperity" across cultures
Modern, global, and doesn't try to sound like a bank.
0 votes

Vote log

No votes yet

Analytics

--
Monthly Active Users
Unique sessions this month
--
Chat Starts
% of MAUs who started 1+ chat
--
Chats / MAU
Avg chats per active user
--
Thumbs Down Rate
Of all rated responses
Coming Soon
NQS — Noa Quality Score
NQS = (Depth × 0.30) + (Resolution × 0.50) + (Execution × 0.20)
Composite metric scoring every conversation across three dimensions: how deeply users engage (Depth), whether Noa answered successfully (Resolution), and whether tool calls executed correctly (Execution). Score range 0–100.
Quarterly targets: Q1 >60, Q2 >68, Q3 >74, Q4 >80
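Using the published weights, the composite can be sketched directly. The function name and input validation below are illustrative, not part of the spec:

```python
# Illustrative NQS calculator based on the stated weights
# (Depth 0.30, Resolution 0.50, Execution 0.20).

def nqs(depth: float, resolution: float, execution: float) -> float:
    """Each input is a 0-100 dimension score; returns the 0-100 composite."""
    for score in (depth, resolution, execution):
        if not 0 <= score <= 100:
            raise ValueError("dimension scores must be in [0, 100]")
    return depth * 0.30 + resolution * 0.50 + execution * 0.20

# A conversation with deep engagement but a failed tool call:
print(nqs(depth=90, resolution=70, execution=40))  # 70.0 (clears the Q2 target)
```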
This dashboard answers
Who is adopting AI within my team?
Is my team shipping quality work efficiently?
Is AI actually helping — or just inflating volume?
Output Score
74.2
▲ 8% vs last month
312
Total Output
18.7
PRs / Engineer
62 / 38
Product / Infra %

Output Score is a normalized unit of engineering work. Instead of counting lines of code or story points, it asks: "How long would this task take an expert engineer?"

How it's calculated:

  • Every PR is evaluated across three dimensions: code complexity, issue resolution difficulty, and codebase impact
  • The model is trained on thousands of expert-graded PRs using a combination of LLMs, domain-specific ML, and reinforcement learning
  • Scores are calibrated so they're comparable across individuals, teams, languages, and repositories

Why it matters: Traditional metrics (lines of code, commit count, PRs merged) reward volume over value. Output Score measures meaningful effort — a 10-line fix to a critical bug scores higher than a 500-line boilerplate addition.

94% correlation with expert-assessed effort. The underlying formula is proprietary to Weave.
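Weave's actual model is proprietary and ML-based, so the following is only a toy stand-in: invented weights over the three stated dimensions, illustrating how a small, hard fix can outscore a large boilerplate change. Nothing here is the real formula.

```python
# Toy stand-in for an Output Score style metric; weights are invented.
from dataclasses import dataclass

@dataclass
class PullRequest:
    complexity: float   # 0-10: how hard the code itself is
    difficulty: float   # 0-10: how hard the underlying issue was
    impact: float       # 0-10: blast radius in the codebase

def output_score(pr: PullRequest) -> float:
    """Hypothetical 0-100 score; not Weave's proprietary model."""
    raw = 0.4 * pr.complexity + 0.35 * pr.difficulty + 0.25 * pr.impact
    return round(raw * 10, 1)

# A 10-line critical bug fix vs. a 500-line boilerplate addition:
critical_fix = PullRequest(complexity=8, difficulty=9, impact=7)
boilerplate  = PullRequest(complexity=2, difficulty=1, impact=2)
print(output_score(critical_fix), output_score(boilerplate))  # 81.0 16.5
```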

PR Cycle Time
2.4d
▼ 0.3d vs last month
First commit → merged (median)
2.1%
PRs Reverted
Rolled back after merge
▼ 0.4% ✓ Good
Prev: 2.5% Target: <3%
8.4
Review Quality
Depth, thoroughness, practicality /10
▲ 0.6 ✓ Good
Prev: 7.8 Target: >8.0
3.2d
Bug Lifecycle
Detected → traced → resolved (median)
▼ 0.8d ✓ Good
Prev: 4.0d Target: <5d
16
Team Members
81%
Using AI Tools
AI Usage by Team Member
S. Martinez: 82%
R. Chen: 76%
A. Patel: 71%
J. Kim: 68%
M. Johnson: 64%
L. Garcia: 45%
T. Nguyen: 22%
AI Cost Over Time
Coming Soon
Coming Soon
Estimate delivery timelines with AI
Upload a deck or describe a concept, and the estimator will project how long it will take based on your team's current velocity, capacity, and historical output.
1
Upload slides or describe the concept
2
AI breaks it into work units
3
Get a timeline based on team velocity
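Step 3 reduces to dividing estimated work units by the team's throughput; a hypothetical sketch (the function, field names, and numbers are all invented):

```python
# Toy timeline projection from work units and historical velocity.

def projected_days(work_units: float, units_per_day: float) -> float:
    """Days to deliver, given the team's recent throughput."""
    if units_per_day <= 0:
        raise ValueError("velocity must be positive")
    return work_units / units_per_day

# e.g. a concept broken into 48 units, team shipping 3 units/day:
print(projected_days(48, 3.0))  # 16.0
```

A real estimator would also account for capacity changes and uncertainty; this only shows the basic velocity division.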
This dashboard answers
Is our AI investment paying off?
How should I balance spend between headcount and AI tooling?
Who are my top performers for year-end promotions?
Net Savings from AI
+$127,400
▲ 18% vs last month
$184,200
Value from AI
$56,800
Total AI Cost
=
$127,400
Net Savings
Engineers: ____    Hourly rate: $ ____ / hr
Value from AI = 340 engineers × $100/hr × 259 business days × 74% AI usage
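The Net Savings card itself is simple arithmetic over the two figures above; a minimal sketch using the numbers shown on the dashboard cards:

```python
# Net savings is value minus cost; figures below are from the cards.

def net_savings(value_from_ai: float, total_ai_cost: float) -> float:
    return value_from_ai - total_ai_cost

value = 184_200   # "Value from AI"
cost = 56_800     # "Total AI Cost"
print(f"${net_savings(value, cost):,.0f}")  # $127,400
```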
AI Cost Over Time
340
Engineers on Platform
74%
Using AI Tools
AI Tool Adoption
Cursor: 78%
Copilot: 62%
Claude: 41%
Devin: 12%
Headcount Implications
AI output boost this quarter
19.2%
Equivalent engineering capacity
+65 engineers
At avg $200K/yr, that represents
$13.1M
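The Headcount Implications card is back-of-envelope arithmetic: the output boost across all engineers on the platform is treated as extra capacity and priced at the average salary. A sketch using the displayed figures (the rounding conventions are assumptions):

```python
# Equivalent capacity from the AI output boost, priced at avg salary.
ENGINEERS = 340        # engineers on platform
BOOST = 0.192          # AI output boost this quarter
AVG_SALARY = 200_000   # average $/yr per engineer

equivalent_engineers = ENGINEERS * BOOST          # 65.28
dollar_value = equivalent_engineers * AVG_SALARY  # 13,056,000

print(f"+{equivalent_engineers:.0f} engineers")   # +65 engineers
print(f"${dollar_value / 1e6:.1f}M")              # $13.1M
```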
Individual ROI Breakdown
#   Engineer      Output   AI Usage   ROI
1   S. Martinez   94       82%        3.1x
2   R. Chen       88       76%        2.8x
3   A. Patel      81       71%        2.4x
4   J. Kim        79       68%        2.2x
5   M. Johnson    72       64%        1.9x
Coming Soon
AI-powered headcount modeling
Model hiring scenarios against your real engineering output data. See how adding, removing, or reallocating engineers affects projected delivery capacity — factoring in ramp time, AI leverage, and historical team velocity.
1
Set target output or delivery date
2
Model scenarios (hire, redeploy, upskill)
3
Get projected capacity & cost impact
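A toy version of the scenario math described above, with invented parameters for ramp time and AI leverage (a real model would use historical velocity data, not constants):

```python
# Toy capacity projection: new hires ramp linearly over several
# quarters, and AI leverage multiplies effective output.

def projected_capacity(current: int, new_hires: int,
                       ramp_quarters: float, ai_leverage: float) -> float:
    """Effective engineer-equivalents next quarter."""
    ramped = new_hires * min(1.0, 1.0 / ramp_quarters)  # partial productivity
    return (current + ramped) * ai_leverage

# 10 hires on a 2-quarter ramp, with a 19.2% AI output boost:
print(projected_capacity(current=340, new_hires=10,
                         ramp_quarters=2, ai_leverage=1.192))
```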
Coming Soon
Forecast AI tooling costs at scale
Project what your AI spend will look like as adoption grows. Simulate cost trajectories for different tool mixes, seat counts, and usage patterns — so you can budget accurately and negotiate vendor contracts with data.
1
Set adoption targets by tool
2
Simulate cost at 80%, 90%, 100% rollout
3
Compare ROI across tool portfolios
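Step 2 can be sketched as a simple per-seat simulation. The per-tool prices below are invented placeholders, not real list prices:

```python
# Toy spend simulation at different rollout levels (placeholder prices).
SEATS = 340
MONTHLY_PRICE = {"Cursor": 40, "Copilot": 19, "Claude": 30}  # $/seat/mo

def monthly_cost(rollout: float) -> float:
    """Total monthly spend if `rollout` share of seats gets every tool."""
    return sum(SEATS * rollout * price for price in MONTHLY_PRICE.values())

for rollout in (0.8, 0.9, 1.0):
    print(f"{rollout:.0%}: ${monthly_cost(rollout):,.0f}")
```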
Coming Soon
Data-driven engineering performance reviews
Generate objective performance summaries for each engineer using output scores, code quality signals, review contributions, and AI leverage — replacing subjective reviews with calibrated, evidence-based evaluations that managers and reports can both trust.
1
Select engineer & review period
2
AI synthesizes output, quality & growth
3
Export review-ready summary with citations
History
Your conversations with Noa will appear here.
Hi, I'm Noa. What's on your mind today?