50 Data Engineering Discovery Call Questions to Ask Vendors (2026)

By Peter Korpak · Chief Analyst & Founder · Verified Jun 5, 2026

These are 50 questions you (the buyer) ask the vendor during the discovery call — NOT interview questions for hiring an in-house data engineer. If you’re researching the latter, this is the wrong page.

The discovery call is the first substantive conversation after a vendor responds to your brief or shortlist request. Most buyers use it to hear a sales pitch. That’s a waste of an hour. If you walk in with a prepared list of pointed questions, the dynamic shifts — you’re auditing, not being sold to.

This list is structured for a 60-minute call. Each question includes a one-line rationale, what a good answer looks like, and what should make you wary. Use it alongside the data engineering vendor scorecard template and the data engineering due diligence checklist.


What is a data engineering discovery call?

A discovery call is a structured, time-boxed conversation where you — the buying organization — probe a vendor’s capability, methodology, and fit before committing to a paid pilot or full engagement. It is not a demo, not a reference check, and not a pitch. If the vendor spends more than 10 minutes presenting slides, you’ve lost control of the agenda.

The best discovery calls are asymmetric: the vendor talks roughly 70% of the time, but you control the topics entirely. Your job is to listen for specifics, flag vague answers, and score responses against a consistent rubric. That consistency is what makes it possible to compare vendors fairly. For the broader evaluation process, start with the data engineering partner selection hub.
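That rubric can be as simple as a weighted average held constant across every vendor call. A minimal sketch — the category names and weights below are illustrative assumptions, not a prescribed scorecard:

```python
# Minimal weighted-rubric scorer for comparing vendors across calls.
# Category names and weights are illustrative; set them once, before call 1,
# and never adjust them between vendors.

WEIGHTS = {
    "reality_check": 0.15,
    "architecture": 0.25,
    "team": 0.20,
    "delivery": 0.15,
    "security": 0.15,
    "commercials": 0.10,
}

def score_vendor(scores: dict[str, float]) -> float:
    """Weighted average of per-category scores (each on a 1-5 scale)."""
    if set(scores) != set(WEIGHTS):
        raise ValueError("score every category, or the comparison isn't fair")
    return round(sum(WEIGHTS[c] * s for c, s in scores.items()), 2)

vendor_a = score_vendor({
    "reality_check": 4, "architecture": 5, "team": 3,
    "delivery": 4, "security": 4, "commercials": 3,
})
print(vendor_a)  # → 3.95, on the same 1-5 scale
```

The point is not the arithmetic — it's that the weights are fixed before the first call, so a charismatic vendor can't quietly re-weight the rubric in their favor.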


How long should a data engineering discovery call take?

60 minutes. Shorter and you skim the surface. Longer and vendors start filling time with anecdotes. The table below shows how to allocate the hour.

| Time | Section | Who Leads | Output |
|---|---|---|---|
| 0–5 min | Introductions + agenda confirmation | Buyer | Shared agenda accepted |
| 5–15 min | Reality-check questions | Buyer | Vendor’s unfiltered read of your brief |
| 15–30 min | Architecture + platform questions | Buyer | Proposed approach, tooling rationale |
| 30–42 min | Team, delivery model, pricing | Buyer | Staffing plan, cadence, rough cost shape |
| 42–52 min | Security, compliance, off-ramp | Buyer | Risk exposure documented |
| 52–60 min | Vendor’s questions + next steps | Vendor | Follow-up asks, timeline agreed |

A vendor who asks zero questions in the final 8 minutes is either over-confident or not genuinely interested. Both are worth noting.


The 50 questions, organized by category

Fifty questions across eight categories. The compact format is deliberate — scan the list before the call, pick the 10–15 most relevant for your specific brief, and use the rest as probes if the vendor opens a door.

[Figure: 50 questions across 8 categories — Reality check (5) · Architecture & platform (8) · Team & continuity (6) · Delivery model (6) · Data quality & observability (5) · Security & compliance (5) · Pricing & commercials (8) · Off-ramp & risk (7)]

Reality-check questions (5)

These five go first, before any architecture discussion. They force the vendor to show you how carefully they read your brief — and how honest they’ll be when a project runs into trouble.

1. What’s the single biggest constraint we haven’t told you about that would stop this project? Why it matters: Tests whether they’ve read between the lines of your brief, not just the lines. Good signal: Names a specific technical or organizational constraint that fits your context (e.g., legacy CDC limitations, a data mesh initiative in flight). Red flag: “We don’t see any blockers” — this means they haven’t thought about it.

2. What in our brief feels under-specified or contradictory? Why it matters: Smart vendors find the gaps before they become change orders. Good signal: They cite a specific ambiguity with a concrete consequence — “You’ve listed real-time and T+1 in the same pipeline. Which takes priority when they conflict?” Red flag: “The brief looked solid to us.” No brief is ever fully solid.

3. If we hired you for this, what would you decline to do? Why it matters: Firms that claim they can do everything are telling you something useful about their judgment. Good signal: A clear boundary — “We don’t do BI front-end work” or “We’d pass on ML model development without a data science co-lead from your side.” Red flag: No answer, or “We’re pretty flexible on scope.” Scope flexibility without limits is a future change-order factory.

4. What’s a data engagement you walked away from in the last 12 months — and why? Why it matters: Tests commercial honesty and whether they have a functioning go/no-go process. Good signal: A specific answer with a principled reason — unrealistic timelines, misaligned success metrics, client unwilling to engage their IT team. Red flag: “We haven’t walked away from anything recently.” Every firm that does high-quality work declines some work.

5. Who else have we likely shortlisted, and how do you differ from them? Why it matters: Tests market awareness and self-knowledge. Good firms know where they sit in the landscape. Good signal: Names two or three plausible competitors and articulates a clear, specific differentiator — not “we’re more collaborative.” Red flag: “We don’t really track competitors” or a list of differentiators that apply to everyone (“certified experts,” “proven methodology”).


Architecture & platform questions (8)

This is where most of the call time should go. Vague architecture answers are a stronger warning sign than a vague price range.

6. Walk us through the architecture you’d propose, end to end. Why it matters: The first architecture sketch tells you whether they tailored a response to your brief or brought a template. Good signal: A specific proposal with named components, data-flow decisions explained, and at least one trade-off acknowledged. Red flag: A generic diagram that could apply to any company in any industry.

7. Why this architecture and not its closest alternative? Why it matters: Forces them to defend the design, not just describe it. Good signal: A clear “we chose X over Y because of your data volume / latency requirement / team maturity” — a reason specific to your situation. Red flag: “This is what we typically recommend.” Typical recommendations are fine for typical engagements. Yours probably isn’t typical.

8. What’s the single most expensive line item over 24 months? Why it matters: The answer reveals whether they’ve modeled cost realistically or are quoting low to win the deal. Good signal: A specific answer — “Snowflake compute at your projected query volume” or “Databricks DBUs during the ML training phase” — with a rough number. Red flag: Deflection to “it depends on usage.” That’s not an answer.

9. How would you handle a 10× data-volume surprise in month 6? Why it matters: Tests whether the architecture has headroom built in or is sized exactly for today. Good signal: Describes a concrete scale path — auto-scaling warehouse tiers, partitioning strategy changes, cost governance controls they’d trigger first. Red flag: “We’d need to revisit the architecture.” That’s a future change order, not a plan.

10. Snowflake Cortex vs Databricks Mosaic AI — when would you pick which? Why it matters: A litmus test for platform independence. The best firms have a genuine opinion based on use-case fit, not badge inventory. Good signal: A nuanced answer based on your workload type, team skill set, and whether you need a unified analytics+ML platform or a best-of-breed approach. Red flag: A reflexive preference for whichever platform they’re more heavily certified on without engaging with your specific context.

11. What’s your stance on dbt vs SQLMesh in 2026? Why it matters: SQLMesh has materially closed the gap on dbt. Firms that don’t have a current opinion are behind. Good signal: A clear view — “dbt Core for most transformations, SQLMesh worth evaluating if you need state-aware migrations or Python models at scale.” Red flag: “We use dbt” with no acknowledgement that SQLMesh exists. Or the reverse — “SQLMesh is clearly better” without a trade-off discussion.

12. How do you decide batch vs streaming for a given pipeline? Why it matters: Streaming is oversold. Buyers pay for streaming complexity they don’t need. Good signal: A decision framework — latency requirement, consumer tolerance for stale data, cost of the Kafka or Flink cluster vs a well-tuned batch job. Red flag: “Streaming is always better for real-time use cases.” That’s a sales pitch, not engineering judgment.
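A decision framework like the one a good vendor describes can be caricatured in a few lines — useful mainly to show that it is a set of explicit thresholds, not vibes. Everything below (function name, thresholds) is an illustrative assumption, not an industry constant:

```python
# Toy encoding of a batch-vs-streaming decision framework.
# Thresholds are illustrative assumptions; a real team tunes them per org.

def recommend_pipeline(latency_sla_s: float,
                       consumers_tolerate_stale: bool,
                       streaming_infra_cost_ratio: float) -> str:
    """
    latency_sla_s: max acceptable data age, in seconds
    consumers_tolerate_stale: can downstream consumers work with T+1 data?
    streaming_infra_cost_ratio: est. cost of a Kafka/Flink setup vs a tuned batch job
    """
    if latency_sla_s >= 3600 or consumers_tolerate_stale:
        return "batch"  # (micro-)batch covers most "near-real-time" asks
    if streaming_infra_cost_ratio > 3.0:
        return "batch"  # the streaming premium isn't justified by the SLA
    return "streaming"

print(recommend_pipeline(86400, True, 5.0))   # → batch
print(recommend_pipeline(30, False, 1.5))     # → streaming
```

A vendor who can articulate their version of those three inputs — latency, staleness tolerance, cost ratio — is exercising engineering judgment. One who starts from the tool is selling.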

13. What does your reference architecture for our vertical look like? Why it matters: Firms with real depth in your industry have an opinionated starting point. Generalists don’t. Good signal: Names specific data sources, regulatory constraints, or modeling patterns common in your vertical — without you prompting them. Red flag: A blank stare or a slide pulled from a different industry deck with your logo swapped in.


Team & continuity questions (6)

The people doing the work matter more than the brand on the proposal. These six questions are designed to surface staffing risk before you sign anything.

14. Who specifically will work on this — names and LinkedIn URLs? Why it matters: The team named in the proposal should be the team on the project. Confirm they exist and are currently available. Good signal: Names, titles, and a clear explanation of who leads versus who supports — delivered without hesitation. Red flag: “We’ll finalize the team after contract signature.” That means you’ll get whoever is available, not whoever is best.

15. What percentage of the team will be substituted between proposal and project start? Why it matters: Some firms routinely promise senior resources in the pitch and rotate in junior staff post-signature. Good signal: “We aim for zero substitution. If a team member changes, we require your sign-off.” Red flag: A non-committal answer, or a policy where substitution is the firm’s call alone.

16. What’s the time-zone overlap with our team? Why it matters: A thin overlap window — three hours or less — creates coordination tax that compounds over months. Good signal: A specific answer — “Our lead engineer is in Warsaw, you’re in Chicago — that’s a 7-hour difference, but we hold a daily 9am–11am Chicago overlap window.” Red flag: “Time zones haven’t been a problem for us.” That’s not an answer to the question.

17. What’s the senior-engineer turnover rate on your data team in the last 12 months? Why it matters: High turnover at the senior level is often the first sign of a firm in trouble. Good signal: A specific number, with context — “We had two senior departures out of 40 engineers; both moved to in-house roles after we placed them on client teams” is different from “we had 15% attrition.” Red flag: “We don’t track that” or an obvious deflection.

18. If your lead engineer leaves mid-project, what’s your replacement protocol? Why it matters: The protocol tells you whether this is a managed risk or an improvised scramble. Good signal: A documented process — shadow period before departure, knowledge-transfer requirements, buyer sign-off on the replacement. Red flag: “It hasn’t happened yet.” It will.

19. What’s the team’s sub-contractor mix and disclosure policy? Why it matters: Sub-contractors are common and not inherently a problem. Undisclosed ones are a problem. Good signal: A clear percentage and a disclosure policy — “We use sub-contractors for up to 20% of hours; all sub-contractors are disclosed in the SOW and subject to the same vetting as our employees.” Red flag: “We don’t typically share that information.”


Delivery model questions (6)

How a firm operates week-to-week determines whether the engagement feels like a partnership or a managed service you’re excluded from.

20. What’s the cadence — daily standup, weekly demo, monthly steering? Why it matters: A mismatch in meeting cadence creates friction from day one. Good signal: A specific proposal — daily async update, weekly sprint review, bi-weekly steering — with flexibility to adjust. Red flag: “We adapt to whatever the client wants.” No strong opinion on delivery structure usually means no strong delivery discipline.

21. How do you run sprints with a client team that doesn’t want daily standups? Why it matters: Tests adaptability. Some enterprise teams have genuine constraints on synchronous time. Good signal: An async alternative — Loom updates, a shared Notion board, a weekly digest — that they’ve used before. Red flag: “We really need daily standups to operate.” Possibly true, but if they can’t adapt, they’ll create friction in a constrained client environment.

22. What’s a typical decision your team makes without consulting us? Why it matters: Defines the boundary of vendor autonomy. You want partners who can operate independently on small decisions and escalate on consequential ones. Good signal: A clear line — “We make tool-version and library-choice decisions without escalation; we always escalate architecture changes, cost spikes above 15%, or data access requests.” Red flag: “We escalate everything to the client.” That’s a high-maintenance partnership that will slow delivery.

23. How do you measure delivery — story points, tickets, outcomes? Why it matters: Story points measure team velocity. Outcomes measure business value. These are very different things. Good signal: A hybrid — “We track tickets for project management; we track agreed outcomes (pipeline SLA uptime, query latency, data quality score) for client reporting.” Red flag: Story-point reporting with no outcome link. That’s a busy team that may not be solving your problem.

24. What’s your default ticketing and project-management tool, and can you adopt ours? Why it matters: Running parallel systems creates coordination overhead and makes it harder to audit delivery. Good signal: A preferred tool with a clear “yes, we can work in Jira/Linear/etc.” and a note on what information they’d need mirrored. Red flag: “We have our own system and it’s too complex to migrate.” That’s a visibility problem for you.

25. How do you handle scope creep mid-sprint? Why it matters: Scope creep is universal. The process for handling it tells you whether you’ll be in constant change-order negotiation. Good signal: A documented process — new requests go into backlog, get prioritized in next sprint planning, generate a change order only above a defined threshold. Red flag: “We try to be flexible.” Flexibility without a process is just a future billing dispute.


Data quality & observability questions (5)

Data quality failures are the most common reason engagements go sideways after go-live. Ask these before you see a single pipeline.

26. How do you define “production-ready” for a data pipeline? Why it matters: If they can’t define it, they can’t hit it. Good signal: A multi-dimensional definition — “Passes all dbt tests, has observability instrumentation in place, documented SLA, a defined on-call runbook, and sign-off from the data consumer.” Red flag: “When it’s running cleanly in production.” That’s not a definition.

27. What observability tools do you instrument by default — Monte Carlo, Bigeye, Anomalo, Datafold? Why it matters: Observability is no longer optional for production data platforms. Which tool they default to tells you about their maturity level. Good signal: A specific tool choice with a rationale — “We default to Monte Carlo for Snowflake workloads; for dbt-heavy shops we often layer in Datafold for regression testing on transformations.” Red flag: “We use custom monitoring scripts.” Viable for greenfield, a concern if you’re running at scale.

28. How do you structure data contracts between teams? Why it matters: Data contracts are a relatively new operational pattern. Firms ahead of the curve have a real answer; firms behind it will talk about documentation instead. Good signal: A description of schema contracts, ownership metadata, SLA fields, and how violations surface — ideally referencing a framework they’ve implemented. Red flag: “We document the schemas in Confluence.” Documentation is not a contract.
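To make the contrast with “documentation in Confluence” concrete, here is a minimal sketch of a data contract as an enforced artifact rather than a wiki page. The field names and structure are illustrative, not any particular framework:

```python
# Minimal data-contract sketch: schema + ownership + SLA fields, with a
# violation check that surfaces breaches instead of silently passing.
# Field names and structure are illustrative, not a standard.
from dataclasses import dataclass, field

@dataclass
class DataContract:
    dataset: str
    owner: str                  # team accountable for the data
    schema: dict                # column -> expected type name
    freshness_sla_hours: int    # max age before the contract is breached
    violations: list = field(default_factory=list)

    def check(self, observed_schema: dict, age_hours: float) -> bool:
        """Returns True if the contract holds; records violations otherwise."""
        self.violations = []
        for col, typ in self.schema.items():
            got = observed_schema.get(col)
            if got != typ:
                self.violations.append(f"schema: {col} expected {typ}, got {got}")
        if age_hours > self.freshness_sla_hours:
            self.violations.append(
                f"freshness: {age_hours}h exceeds SLA of {self.freshness_sla_hours}h")
        return not self.violations

contract = DataContract(
    dataset="orders", owner="payments-team",
    schema={"order_id": "string", "amount": "decimal"},
    freshness_sla_hours=24,
)
ok = contract.check({"order_id": "string", "amount": "float"}, age_hours=30)
print(ok, contract.violations)  # False, with two violations to escalate
```

The distinguishing feature is the `check` method: a contract that cannot be violated programmatically is just documentation with extra steps.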

29. What’s your dbt test coverage target? Why it matters: No target means no enforcement. Low targets mean production surprises. Good signal: A specific target — “We target 80% test coverage on critical models; 100% on models feeding external-facing dashboards or ML features.” Red flag: “We write tests for the important stuff.” That’s a judgment call made at 4pm on a Friday under delivery pressure.
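A coverage target is only real if it is measured. A rough sketch of measuring it from dbt’s `manifest.json`, assuming the layout dbt currently emits (nodes keyed by unique_id, tests listing the models they cover under `depends_on.nodes`):

```python
# Rough dbt test-coverage check against a parsed manifest.json.
# Assumes the manifest layout dbt emits today: nodes keyed by unique_id,
# with tests declaring covered models under depends_on.nodes.

def model_test_coverage(manifest: dict) -> float:
    """Fraction of models that have at least one test depending on them."""
    nodes = manifest["nodes"]
    models = {uid for uid, n in nodes.items() if n["resource_type"] == "model"}
    tested = set()
    for n in nodes.values():
        if n["resource_type"] == "test":
            tested.update(uid for uid in n["depends_on"]["nodes"] if uid in models)
    return len(tested) / len(models) if models else 1.0

# Toy manifest: three models, tests cover two of them.
manifest = {"nodes": {
    "model.shop.orders":    {"resource_type": "model"},
    "model.shop.customers": {"resource_type": "model"},
    "model.shop.payments":  {"resource_type": "model"},
    "test.shop.not_null_orders":  {"resource_type": "test",
                                   "depends_on": {"nodes": ["model.shop.orders"]}},
    "test.shop.unique_customers": {"resource_type": "test",
                                   "depends_on": {"nodes": ["model.shop.customers"]}},
}}
print(f"{model_test_coverage(manifest):.0%}")  # → 67% — below an 80% target
```

A vendor with a real target can show you the script (or CI gate) that computes exactly this number on every merge.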

30. How do you handle a silent data-quality regression? Why it matters: Silent regressions — where data looks correct but isn’t — are the most dangerous failure mode. Good signal: A combination of statistical anomaly detection, volume checks, freshness checks, and a defined escalation path when thresholds breach. Red flag: “We’d catch it in testing.” Silent regressions, by definition, pass tests.
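The volume-check leg of a good answer has a simple shape: a row count that passes every schema test but falls outside the recent statistical envelope. A minimal sketch — the 3-sigma threshold is a common default, not a rule:

```python
# Sketch of a volume check that catches silent regressions: row counts that
# pass schema tests but drift outside the recent statistical envelope.
# The 3-sigma threshold is a common default, not a universal rule.
import statistics

def volume_anomaly(history: list[int], today: int, sigmas: float = 3.0) -> bool:
    """True if today's row count falls outside mean +/- sigmas * stdev."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return today != mean
    return abs(today - mean) > sigmas * stdev

daily_rows = [10_120, 9_980, 10_240, 10_050, 9_890, 10_160, 10_010]
print(volume_anomaly(daily_rows, 10_100))  # → False — within the envelope
print(volume_anomaly(daily_rows, 6_200))   # → True — "valid" data, collapsed volume
```

The second case is the dangerous one: every row in the 6,200 is individually correct, so tests pass, but a third of the source feed has silently gone missing. Freshness and distribution checks follow the same pattern against different statistics.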


Security & compliance questions including EU AI Act (5)

Regulatory exposure flows downstream from vendor practice. These five questions are non-negotiable for any regulated-industry buyer. Note that EU AI Act enforcement for high-risk AI systems begins August 2026 — any vendor touching ML pipelines for EU data subjects should have a current position on this.

31. SOC 2 Type II plus ISO 27001 — current and last audit date? Why it matters: Certifications expire and can be gamed. The date matters as much as the status. Good signal: Both certifications current, audit date within 12 months, and willingness to share the attestation letter. Red flag: “We’re working toward SOC 2.” That means you’d be their first regulated client.

32. How do you handle PHI and PII in development environments? Why it matters: Most data breaches happen in dev, not prod. Good signal: Automated PII detection, synthetic data generation for dev environments, no production data in CI/CD pipelines without explicit approval and masking. Red flag: “We anonymize data before it goes to dev.” Follow up: “How?” If the answer is column-drop rather than tokenization or synthetic replacement, probe further.
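The difference between column-drop and tokenization is worth seeing once: deterministic tokenization keeps raw PII out of dev while preserving joins across tables, which column-drop destroys. A hedged sketch — the key handling below is illustrative, and in practice the secret lives in a KMS, never in code:

```python
# Deterministic tokenization sketch: same input -> same token, so joins
# across tables still work in dev, unlike simply dropping the column.
# Key handling here is illustrative; in real use the key lives in a KMS.
import hmac, hashlib

SECRET_KEY = b"dev-only-example-key"  # illustrative; never hardcode this

def tokenize(value: str) -> str:
    """HMAC-based token: stable per input, not reversible without the key."""
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

t1 = tokenize("jane.doe@example.com")
t2 = tokenize("jane.doe@example.com")
assert t1 == t2           # deterministic: foreign-key joins are preserved
assert "jane" not in t1   # the raw PII never reaches the dev environment
print(t1)
```

If the vendor’s “anonymization” answer amounts to dropping the email column, ask how their dev environment joins customers to orders afterward. Usually it can’t — which is when production data quietly creeps back into dev.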

33. Walk us through your model risk-management protocol for high-risk AI systems under the EU AI Act. Why it matters: EU AI Act Regulation 2024/1689 applies to high-risk AI systems handling EU data subjects. Firms building ML pipelines in this scope need a current compliance position, not a vague one. Good signal: Reference to their internal AI governance framework, risk classification process, conformity assessment approach, and how they maintain the technical documentation required under Article 11. Red flag: “We’re not sure that applies to our work.” If they’re building recommendation systems, credit-scoring pipelines, or HR tooling for EU entities, it almost certainly does.

34. What’s your approach to data residency for EU-regulated workloads? Why it matters: Data residency requirements under GDPR and sector-specific regulation (e.g., financial services) constrain which cloud regions and third-party services are permissible. Good signal: A specific answer — “We deploy to eu-west-1 by default for EU clients; we flag any third-party service that processes data outside the EU for your legal review.” Red flag: “Data residency hasn’t come up for us.”

35. What was your last security incident, and what changed because of it? Why it matters: Every firm has had an incident. What they learned from it tells you more than their policies. Good signal: A candid description of what happened, impact, remediation, and a specific process change that resulted. Red flag: “We haven’t had any security incidents.” Implausible for any firm older than two years.


Pricing & commercials questions (8)

Price is the dimension buyers think they understand and vendors know they control. These eight questions close the information asymmetry.

36. What’s your blended rate, and what does “blended” mean to you? Why it matters: “Blended rate” means different things. Some firms blend across all geographies; others blend only within a project team. The definition determines whether the number is meaningful. Good signal: A specific number ($175–$250/hr is typical for a mid-market US/EU firm in 2026) with a clear explanation of how roles are weighted. Red flag: A blended rate with no breakdown. Ask for the rate card behind the blend.
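Since “blended” hides the role mix, the arithmetic behind it is worth seeing once: a blend is just an hours-weighted average, so the rate card and mix behind it matter more than the headline number. Roles and rates below are illustrative:

```python
# Blended-rate arithmetic: an hours-weighted average over the rate card.
# Roles, rates, and mixes are illustrative, not market data.

rate_card = {"principal": 290, "senior": 220, "mid": 160, "offshore": 95}

def blended_rate(hours_by_role: dict[str, float]) -> float:
    """Hourly blend = total cost / total hours for a given staffing mix."""
    total_hours = sum(hours_by_role.values())
    cost = sum(rate_card[r] * h for r, h in hours_by_role.items())
    return round(cost / total_hours, 2)

# Two plausible team mixes under the same rate card:
mix_a = {"principal": 10, "senior": 60, "mid": 30}                  # senior-heavy
mix_b = {"principal": 5, "senior": 20, "mid": 25, "offshore": 50}   # offshore-heavy
print(blended_rate(mix_a), blended_rate(mix_b))  # → 209.0 146.0
```

Two quotes $60/hr apart can come from the same firm and the same rate card — which is exactly why you ask for the rate card behind the blend, not just the blend.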

37. What’s your change-order rate, and is there a cap? Why it matters: Change-order rate — how many change orders a typical engagement generates — is a proxy for how well they scope projects upfront. Good signal: A low rate with a transparent cap — “We’ve averaged 0.8 change orders per project; we cap change-order margin at 15% above the original SOW without re-scoping.” Red flag: “It depends on the project.” That’s not an answer.

38. Who owns the IP, and is there any co-IP language in your standard MSA? Why it matters: Ambiguous IP ownership is a serious legal risk, especially for reusable frameworks or proprietary models built during the engagement. Good signal: A clear statement: “Work product is yours. We retain rights to our pre-existing IP (internal frameworks, accelerators) but license it to you royalty-free for the deliverables.” Red flag: Co-IP language that gives the vendor joint ownership of anything you commission. Push back on this clause.

39. What’s your typical milestone payment structure, and is there a holdback? Why it matters: A holdback — typically 10–20% held until final delivery — aligns the vendor’s financial incentive with your satisfaction. Good signal: A milestone structure with a holdback, and clear definitions of what “milestone achieved” means for each payment trigger. Red flag: Payment tied to elapsed time rather than delivery, or milestones defined so loosely that the vendor can self-certify completion.

40. What’s the total cost of a 4-week paid pilot, and is it credit-forward? Why it matters: A paid pilot is the right way to de-risk a new vendor relationship. Whether the cost credits toward the full engagement tells you how confident the vendor is in their own delivery. Good signal: A specific pilot price, clear deliverables, and explicit credit-forward terms in writing. Red flag: No pilot option, or a pilot that doesn’t credit forward. The latter signals they treat the pilot as a revenue event, not a proof of fit.

41. What pricing models do you support — T&M, fixed-bid, outcome-based? Why it matters: Pricing model choice shifts risk. T&M favors the vendor; fixed-bid favors the buyer if scoped correctly; outcome-based aligns incentives but requires clear metric definitions. Good signal: Experience with all three, plus a genuine view on which model fits your brief — and why. Red flag: “We only do T&M.” That tells you something about how they manage scope risk.

42. Where in your standard contract is the off-ramp clause? Why it matters: Knowing where the off-ramp lives before you sign is basic buyer hygiene. See the data engineering RFP mistakes guide for why this gets overlooked. Good signal: They know the clause number and summarize it accurately — termination window, notice period, payment obligations on exit. Red flag: “I’d have to check with legal.” A vendor who doesn’t know where the off-ramp is in their own MSA is not prepared.

43. What’s the smallest engagement you take on, and what’s the largest? Why it matters: A firm whose minimum engagement is $500K will struggle to care about a $150K project. A firm whose maximum is $300K may not have the depth for a $2M program. Good signal: A range that includes your project size with evidence of similar-scale delivery. Red flag: Your project is at either extreme of their range. That’s a fit problem, not a price problem.


Off-ramp & risk questions (7)

Most buyers think about off-ramps after something goes wrong. Ask these questions before you sign.

44. If we terminate at month 6, what does the 30-day off-ramp look like? Why it matters: A clear off-ramp process means someone has thought about it. An improvised answer means they haven’t. Good signal: A described handover sequence — knowledge transfer, documentation delivery, access revocation schedule, final invoice calculation. Red flag: “We’d work it out with you at the time.” That’s a negotiation you’ll be having from a weak position.

45. What documentation do we receive at off-ramp? Why it matters: Undocumented systems are unusable systems. Know the documentation package before you’re holding it. Good signal: A specific list — architecture decision records, pipeline runbooks, data dictionaries, infrastructure-as-code repositories, CI/CD pipeline documentation. Red flag: “Whatever we’ve produced during the engagement.” That’s not a commitment; it’s a shrug.

46. What happens to your team’s access on day 1 of off-ramp? Why it matters: Access revocation is a security requirement, not a formality. Good signal: Immediate revocation of non-essential access on notice, with a documented process for the residual access needed to complete handover tasks. Red flag: A vague answer or surprise at the question. Access control at off-ramp should be a standard part of their engagement model.

47. What’s the largest project you’ve shipped that ended in the last 12 months — and how did the handover go? Why it matters: How they describe the handover of a completed project reveals how much they invest in client self-sufficiency versus dependency. Good signal: A specific project with a described handover — “The client’s team could run all pipelines independently within two weeks of go-live; we provided 30 days of support SLA.” Red flag: “The client still uses us for ongoing support.” That’s fine if it’s elected; it’s a problem if it’s dependency by design.

48. Have you ever been removed from an engagement? What did you learn? Why it matters: Every firm that’s been in business long enough has been removed from something. The answer to “what did you learn” is the useful part. Good signal: A candid answer with a specific lesson applied to their current practice. Red flag: “That hasn’t happened to us.” Statistically improbable for any firm older than five years.

49. What’s the cyber-insurance limit in your standard MSA? Why it matters: Your exposure in the event of a vendor-side data breach is partially bounded by their insurance limits. Good signal: A specific limit ($1M–$5M is typical; enterprise buyers should ask for $5M+) and a willingness to share the certificate. Red flag: “I’d need to check.” This should be in the standard MSA. If they don’t know, it’s probably low.

50. What’s the one clause we should push back on in your contract? Why it matters: A vendor willing to name their most buyer-unfriendly clause is a vendor you can trust more than one who pretends the contract is standard. Good signal: A specific clause — often an automatic renewal provision, a limitation-of-liability cap that’s too low, or broad IP language — with an explanation of why it’s there and what a reasonable negotiated position looks like. Red flag: “Our contract is pretty standard.” Every contract has at least one clause that favors the drafter.


What if you only have 30 minutes? (The top 10)

If you have a hard 30-minute limit, strip the list to these ten. They cover the five dimensions that most frequently determine whether a data engineering engagement succeeds or fails: brief comprehension, architecture judgment, team reality, commercial transparency, and exit terms.

  1. What in our brief feels under-specified or contradictory? (brief comprehension)
  2. Walk us through the architecture you’d propose, end to end. (technical depth)
  3. What’s the single most expensive line item over 24 months? (cost realism)
  4. Who specifically will work on this — names and LinkedIn URLs? (staffing reality)
  5. What percentage of the team will be substituted between proposal and project start? (bait-and-switch risk)
  6. How do you handle a silent data-quality regression? (delivery maturity)
  7. Who owns the IP — and is there any co-IP language in your standard MSA? (commercial risk)
  8. What’s the total cost of a 4-week paid pilot, and is it credit-forward? (de-risking)
  9. What was your last security incident, and what changed because of it? (security culture)
  10. What’s the one clause we should push back on in your contract? (negotiation honesty)

Frequently asked questions

What is the most important data engineering discovery call question?

There’s no single most important question because what matters depends on your biggest unresolved risk. If you don’t yet know what your biggest risk is, start with question 2 — “What in our brief feels under-specified or contradictory?” — because the answer tells you where to focus the rest of the call.

Should you send the questions to the vendor in advance?

For categories 1 through 5 (reality check, architecture, team, delivery, data quality), no. You want unscripted answers. For categories 6 through 8 (security, pricing, off-ramp), sending them 48 hours in advance is reasonable — those answers require internal research, and a vendor who can’t answer them without preparation is telling you something.

How is a discovery call different from a sales pitch?

A sales pitch is vendor-controlled — they present, you react. A discovery call is buyer-controlled — you ask, they answer, and silence or deflection is data. The structural difference is who sets the agenda. Send your agenda in advance and state clearly that you’ll be asking specific questions rather than hearing a presentation.

Can AI conduct the discovery call for you?

AI can help you prepare the question set, score responses against a rubric after the call, and flag contradictions between what a vendor says in discovery and what appears in their MSA. It cannot replace the judgment call you make when a vendor gives a technically correct answer with obvious discomfort — or answers your off-ramp question without hesitation because they’ve done it before. The tell is in the pause.


Next step

If you’re at the stage of running discovery calls, you’re past the RFP phase and ready to score. Use the data engineering vendor scorecard template to turn call notes into a defensible ranking, and the data engineering due diligence checklist for the verification work that happens after discovery.

For the prior step — how to structure your RFP process before you ever get on a call — see RFP process best practices.

If you’d rather skip the process and get matched directly to vetted firms, start here.

Also in this cluster: vendor evaluation criteria and PoC scoping guide.

Peter Korpak · Chief Analyst & Founder

Data-driven market researcher with 20+ years in market research and 10+ years helping software agencies and IT organizations make evidence-based decisions. Former market research analyst at Aviva Investors and Credit Suisse.

Previously: Aviva Investors · Credit Suisse · Brainhub · 100Signals
