AI Summarization

Improvement Focus:

Improve work efficiency for enterprise clients with AI-powered product innovation (LLM, NLP).

My Responsibility:

* Collaborated with ML engineers, software engineers, designers, Financial Analysts (Editorial), and Annotators (DataOps)

* Defined multiple guidance documents throughout the product lifecycle

* Presented project progress to senior leadership

Context

During Research Center 2.0 user research on the ideal research experience, users repeatedly raised difficulty in accessing relevant information as a pain point. Meanwhile, many competitors had been releasing AI-powered features to address the same problem and had demonstrated early product-market fit.

Objectives

Build an AI-driven summarization service to deliver productivity and enhanced user experience to our clients' research and valuation workflows.

Success Metrics

Model Performance (launch criteria, proposed by ML team):

  • Factual consistency (accuracy)

  • Relevancy

  • Sentiment accuracy

  • Readability/Length

  • Overall impression

Model Performance (post-launch monitoring):

  • Quality-related user feedback (thumbs up/down)

Feature Performance (increase usage of the transcript page):

  • MAU (monthly active users) in the transcript tab

  • Average session time in the transcript tab

Result: Research Center MAU increased 36% and monthly views and downloads increased 94% within six months of launch.
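One lightweight way to operationalize these criteria is a per-summary evaluation record that carries both the launch-gating human scores and the post-launch thumbs signal. A minimal sketch in Python; the 1-5 rubric and field names are illustrative assumptions, not the production schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SummaryEvaluation:
    """One evaluation record for a generated summary.

    Scores use an assumed 1-5 rubric; field names are illustrative,
    not the production schema.
    """
    summary_id: str
    factual_consistency: int  # accurate against the source transcript?
    relevancy: int            # covers what analysts actually need?
    sentiment_accuracy: int   # tone matches the source?
    readability_length: int   # readable and within the length budget?
    overall_impression: int
    # Post-launch signal: thumbs up (True) / down (False); unset pre-launch.
    user_feedback: Optional[bool] = None
```

Keeping the pre-launch rubric scores and the post-launch feedback on the same record makes it straightforward to check whether the launch criteria actually predicted user satisfaction.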

Roadmap

[Image: AI Summarization Roadmap]

Pain Points

- Difficulty in quickly accessing the key insights needed for the valuation process

- Overwhelmed by the sheer volume of information in transcripts and research reports, leading to inefficiency in pinpointing relevant information

Solutions

AI Insights Panel in Transcripts

Approach: Working with Ambiguity

Given the inherent uncertainty throughout the AI/ML product development lifecycle, I established clear definitions for model output to articulate what we're building, model performance to measure how well it's working, and annotation metrics to ensure the reliability of our training data.

Define Expected Model Output

The "product requirements" that can't be captured in Design or traditional dataset requirements.

[Image: Model Output 1]
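One way to pin down requirements like these is an explicit output contract the ML team can build and evaluate against. A hypothetical sketch; the field names and categories are assumptions for illustration:

```python
from typing import Literal, TypedDict

class TranscriptInsight(TypedDict):
    """Assumed shape of one AI-generated transcript insight; illustrative only."""
    bullet: str                  # one key takeaway, kept short for readability
    sentiment: Literal["positive", "neutral", "negative"]
    source_quotes: list[str]     # transcript passages backing the bullet, so
                                 # analysts can verify factual consistency
```

Requiring supporting quotes alongside each bullet is the kind of constraint that lives in neither a design file nor a dataset spec, yet directly serves the factual-consistency metric.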
Define Success Metrics on Model Performance

The PM defines the priority of the metrics, while ML engineers suggest target ranges for each metric, as in the sketch below.
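That division of labor can be captured in a single launch-criteria table: priority ranks from the PM, score floors from the ML team. The metric names mirror the rubric above; the ranks and thresholds are assumptions, not the real values:

```python
# Illustrative launch-criteria table: priorities come from the PM,
# score floors from the ML team. Ranks and thresholds are assumed.
LAUNCH_CRITERIA = [
    # (metric, PM priority rank, minimum mean score on a 1-5 rubric)
    ("factual_consistency", 1, 4.5),  # non-negotiable in a finance product
    ("relevancy",           2, 4.0),
    ("sentiment_accuracy",  3, 4.0),
    ("readability_length",  4, 3.5),
    ("overall_impression",  5, 4.0),
]

def meets_launch_criteria(mean_scores: dict[str, float]) -> bool:
    """Launch only if every metric clears its ML-suggested floor."""
    return all(
        mean_scores.get(metric, 0.0) >= floor
        for metric, _rank, floor in LAUNCH_CRITERIA
    )
```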

Define Evaluation Metrics for Annotation

A modification to the model intended to resolve one issue can inadvertently introduce new problems in areas that were previously functioning well.
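One way to catch this is to compare a candidate model's per-dimension scores against the current baseline and flag anything that got meaningfully worse, regardless of which dimension the change targeted. A minimal sketch, with made-up numbers:

```python
def find_regressions(
    baseline: dict[str, float],
    candidate: dict[str, float],
    tolerance: float = 0.1,
) -> list[str]:
    """Flag every metric where the candidate model scores meaningfully
    worse than the current baseline, even if the metric the change
    targeted improved."""
    return [
        metric
        for metric, base_score in baseline.items()
        if candidate.get(metric, 0.0) < base_score - tolerance
    ]

# A change that lifts relevancy but quietly hurts factual consistency:
baseline  = {"factual_consistency": 4.6, "relevancy": 3.9}
candidate = {"factual_consistency": 4.2, "relevancy": 4.4}
assert find_regressions(baseline, candidate) == ["factual_consistency"]
```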

Key Takeaways

1. The Unique AI Project Lifecycle
  1. PM provides product requirements/PRD

  2. ML team builds a POC based on product requirements

  3. Model training & evaluation: incorporate evaluation feedback to continuously improve model outputs

  4. "Productionize the model": ML team builds and API for the model

  5. Product development: integrate model API with existing platform services, develop BE and FE components

  6. Use real input data to produce AI-generated content

  7. Post-launch human oversight & correction, feedback data collection, and model enhancement
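As an illustration of step 4, a minimal sketch of wrapping the trained model behind an HTTP API that platform services (step 5) can integrate with. FastAPI and every endpoint and field name here are assumptions, not the team's actual stack:

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class SummarizeRequest(BaseModel):
    transcript_id: str
    text: str

class SummarizeResponse(BaseModel):
    transcript_id: str
    bullets: list[str]

def run_model(text: str) -> list[str]:
    """Stub standing in for the trained summarization model from step 3."""
    return [text[:100]]

@app.post("/v1/summarize", response_model=SummarizeResponse)
def summarize(req: SummarizeRequest) -> SummarizeResponse:
    # Backend services call this endpoint instead of the model directly,
    # which keeps model upgrades (step 7) invisible to the platform.
    return SummarizeResponse(
        transcript_id=req.transcript_id,
        bullets=run_model(req.text),
    )
```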

Other Projects: Auth0 Migration · 12-Month Roadmap
