Alvin Hans
Case Study // Flagship 03

Fintech Review ABSA

Applied ML for Indonesian fintech reviews, structured around risk, trust, and service rather than one coarse sentiment label.

Aspects: Risk / Trust / Service
Experiment Tracks: Baseline + PEFT
Surface: Streamlit Dashboard

The Strategic Problem

The Context

"The repository is a curated public version of a larger thesis and experimentation workspace, designed to keep the full logic of the workflow visible without bundling private datasets or heavy checkpoints."

A single overall sentiment score was too coarse for fintech reviews because the same review can talk about billing risk, platform trust, and service quality at the same time.

  • The Trade-off Map: Operational overhead constrained the design, so practical reliability mattered more than squeezing out theoretical accuracy.
  • Constraint 01: Public release scope: the repo had to stay understandable without shipping raw private assets, large checkpoints, or machine-local experiment folders.
  • Constraint 02: Noisy Indonesian review text: preprocessing and dataset reconciliation matter before model comparison becomes meaningful (a minimal normalization sketch follows this list).
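
To make Constraint 02 concrete, here is a minimal normalization sketch in the spirit of the pipeline. The slang map, regex rules, and example review are illustrative stand-ins, not the repo's actual preprocessing assets.

```python
import re

# Illustrative slang map; the real pipeline would use a much larger
# Indonesian colloquial-to-formal dictionary.
SLANG_MAP = {"gak": "tidak", "ga": "tidak", "bgt": "banget", "tdk": "tidak"}

def clean_review(text: str) -> str:
    """Normalize one raw Google Play review before ABSA inference."""
    text = text.lower()
    text = re.sub(r"https?://\S+", " ", text)   # strip URLs
    text = re.sub(r"[^a-z0-9\s]", " ", text)    # drop emoji and punctuation
    text = re.sub(r"(.)\1{2,}", r"\1\1", text)  # "bagusss" -> "baguss"
    tokens = [SLANG_MAP.get(tok, tok) for tok in text.split()]
    return " ".join(tokens)

print(clean_review("Aplikasi bagusss bgt, gak ada masalah!!!"))
# -> aplikasi baguss banget tidak ada masalah
```
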
System Blueprints

The Diamond Centerpiece

Stage 01: Data pipeline
Stage 02: Inference and training
Stage 03: Evaluation layer
Stage 04: Delivery surface
Architecture Map: A four-stage pipeline that moves raw reviews through preprocessing, inference, and evaluation to the dashboard surface.

Technical Rationale

Core Approach

Built the repository around a practical loop: preprocess noisy Google Play reviews, run aspect-based inference, compare baseline and PEFT tracks, and surface the outputs in Streamlit.

Outcome

The public repo now reads as a complete workflow instead of a model demo, with selected evaluation artifacts, a live dashboard surface, and reproducible entry points for inference and comparison.

Data pipeline

Google Play review collection, preprocessing, and dataset reconciliation for the active ABSA setup.
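
A minimal sketch of what the collection step can look like with the google-play-scraper package; the app id, review count, and selected fields are placeholders rather than the project's real configuration.

```python
from google_play_scraper import Sort, reviews

# Placeholder app id; the project's actual target apps are not
# bundled in the public repo.
APP_ID = "com.example.fintech"

# Pull a batch of Indonesian-language reviews, newest first.
batch, _token = reviews(
    APP_ID,
    lang="id",
    country="id",
    sort=Sort.NEWEST,
    count=200,
)

rows = [{"review_id": r["reviewId"], "text": r["content"], "stars": r["score"]}
        for r in batch]
print(f"collected {len(rows)} reviews")
```
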

Inference and training

Baseline and PEFT experiment tracks, including LoRA, DoRA, AdaLoRA, and QLoRA, for risk / trust / service prediction.
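
The four adapter variants can be expressed as configurations of the Hugging Face peft library, as in the sketch below. The base encoder, target modules, and hyperparameters are illustrative choices, not the repository's tuned settings.

```python
from transformers import AutoModelForSequenceClassification, BitsAndBytesConfig
from peft import AdaLoraConfig, LoraConfig, get_peft_model

BASE = "indobenchmark/indobert-base-p1"  # illustrative Indonesian encoder

# LoRA vs. DoRA: DoRA is the same adapter with weight decomposition enabled.
lora = LoraConfig(r=8, lora_alpha=16, target_modules=["query", "value"])
dora = LoraConfig(r=8, lora_alpha=16, target_modules=["query", "value"],
                  use_dora=True)

# AdaLoRA reallocates the rank budget across modules during training.
adalora = AdaLoraConfig(init_r=12, target_r=8, total_step=1000,
                        target_modules=["query", "value"])

# QLoRA = LoRA on top of a 4-bit quantized base (needs CUDA + bitsandbytes).
quant = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4")
model = AutoModelForSequenceClassification.from_pretrained(
    BASE, num_labels=3, quantization_config=quant, device_map="auto")
model = get_peft_model(model, lora)
model.print_trainable_parameters()
```
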

Evaluation Metrics

Quantitative Validation

Observation 01

The model predicts three domain-specific outputs (risk, trust, and service), which is more useful than a single sentiment label.
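
One common way to get three aspect outputs from a single pass is a shared encoder with one small classification head per aspect. The head design and base model in this sketch are illustrative assumptions, not necessarily the repo's exact architecture.

```python
import torch.nn as nn
from transformers import AutoModel

class MultiAspectClassifier(nn.Module):
    """Shared encoder with one polarity head per aspect."""

    ASPECTS = ("risk", "trust", "service")

    def __init__(self, base="indobenchmark/indobert-base-p1", n_classes=3):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(base)
        hidden = self.encoder.config.hidden_size
        self.heads = nn.ModuleDict(
            {a: nn.Linear(hidden, n_classes) for a in self.ASPECTS})

    def forward(self, input_ids, attention_mask):
        # The [CLS] representation feeds every aspect head.
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]
        return {a: head(cls) for a, head in self.heads.items()}
```
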

Observation 02

Baseline and PEFT experiment tracks are both included, so the comparison story is built into the repository instead of implied.
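
A built-in comparison story can be as simple as loading each track's metric artifact into one table. The file paths and metric keys below are hypothetical, standing in for whatever evaluation artifacts the repo actually exports.

```python
import json
import pandas as pd

# Hypothetical artifact paths and schemas.
runs = {"baseline": "artifacts/baseline_metrics.json",
        "lora": "artifacts/lora_metrics.json"}

frames = []
for name, path in runs.items():
    with open(path) as f:
        metrics = json.load(f)  # e.g. {"risk_f1": 0.81, "trust_f1": ...}
    frames.append(pd.Series(metrics, name=name))

print(pd.concat(frames, axis=1).round(3))
```
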

Observation 03

The dashboard makes artifact inspection and live inference readable, which turns the project into a usable analysis surface rather than a notebook-only experiment.
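
A minimal sketch of that surface in Streamlit; `predict_aspects` here is a hypothetical stand-in for the repo's real inference entry point.

```python
import streamlit as st

def predict_aspects(text: str) -> dict:
    # Hypothetical stand-in for the project's actual inference call.
    return {"risk": "negative", "trust": "neutral", "service": "positive"}

st.title("Fintech Review ABSA")
review = st.text_area("Paste a Google Play review")

if st.button("Analyze") and review.strip():
    preds = predict_aspects(review)
    cols = st.columns(3)
    for col, (aspect, label) in zip(cols, preds.items()):
        col.metric(aspect.title(), label)
```
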

Delivery & Reflections

ABSA framing matters because one review can contain multiple business signals, and collapsing everything into one polarity label hides that structure.

Good public ML repos do not need every checkpoint bundled if preprocessing, evaluation entry points, and summary artifacts are documented clearly.

For this project, the dashboard is part of the technical story because it shows how multi-aspect outputs become inspectable for non-model users.

Project Repository & Exploration