Fintech Review ABSA
Applied ML for Indonesian fintech reviews, structured around risk, trust, and service rather than one coarse sentiment label.
The Strategic Problem
The Context
"The repository is a curated public version of a larger thesis and experimentation workspace, designed to keep the full logic of the workflow visible without bundling private datasets or heavy checkpoints."
A single overall sentiment score was too coarse for fintech reviews because the same review can talk about billing risk, platform trust, and service quality at the same time.
- The Trade-off Map: The project was constrained by operational overhead, so practical reliability mattered more than peak theoretical accuracy.
- Constraint 01: Public release scope: the repo had to stay understandable without shipping raw private assets, large checkpoints, or machine-local experiment folders.
- Constraint 02: Noisy Indonesian review text: preprocessing and dataset reconciliation matter before model comparison becomes meaningful.
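To make Constraint 02 concrete, here is a minimal preprocessing sketch for noisy Indonesian review text using only the standard library. The cleaning steps (URL stripping, character-run collapsing, slang normalization) are typical for Google Play review data; the `SLANG` map and function name are illustrative assumptions, not the repo's actual lexicon or API.

```python
import re

# Illustrative slang map; the real project would use a fuller Indonesian lexicon.
SLANG = {"gak": "tidak", "bgt": "banget", "tp": "tapi"}

def preprocess(text: str) -> str:
    text = text.lower()
    text = re.sub(r"https?://\S+", " ", text)        # strip URLs
    text = re.sub(r"(.)\1{2,}", r"\1\1", text)       # collapse runs: "bagusssss" -> "baguss"
    text = re.sub(r"[^a-z0-9\s]", " ", text)         # drop punctuation and emoji
    tokens = [SLANG.get(t, t) for t in text.split()] # normalize common slang
    return " ".join(tokens)

print(preprocess("Aplikasinya bagusssss bgt!!! gak ada masalah http://x.co/ab"))
# -> "aplikasinya baguss banget tidak ada masalah"
```

Dataset reconciliation (deduplicating scraped batches, aligning label schemas) would sit after this step, before any model comparison.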
The Diamond Centerpiece
Technical Rationale
Core Approach
Built the repository around a practical loop: preprocess noisy Google Play reviews, run aspect-based inference, compare baseline and PEFT tracks, and surface the outputs in Streamlit.
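The loop above can be sketched end to end. Everything here is a hypothetical shape, not the repo's real code: the stub model, function names, and the idea that a Streamlit view would render the comparison rows are all assumptions made for illustration.

```python
from typing import Callable

ASPECTS = ("risk", "trust", "service")

def predict_aspects(text: str, model: Callable[[str, str], str]) -> dict:
    """Run one model over each aspect of a single review."""
    return {aspect: model(text, aspect) for aspect in ASPECTS}

def compare_tracks(reviews, baseline, peft):
    """Pair baseline and PEFT predictions per review; a dashboard would render these rows."""
    return [{"review": r,
             "baseline": predict_aspects(r, baseline),
             "peft": predict_aspects(r, peft)}
            for r in reviews]

# Stand-in model: flags reviews mentioning lateness ("telat") as negative.
stub = lambda text, aspect: "negative" if "telat" in text else "positive"
rows = compare_tracks(["penagihan telat terus"], stub, stub)
print(rows[0]["baseline"]["risk"])  # -> negative
```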
Outcome
The public repo now reads as a complete workflow instead of a model demo, with selected evaluation artifacts, a live dashboard surface, and reproducible entry points for inference and comparison.
Data pipeline
Google Play review collection, preprocessing, and dataset reconciliation for the active ABSA setup.
Inference and training
Baseline and PEFT experiment tracks, including LoRA, DoRA, AdaLoRA, and QLoRA, for risk / trust / service prediction.
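The common idea behind these PEFT tracks is the low-rank update from LoRA: the pretrained weight stays frozen and only two small matrices are trained. A pure-Python sketch with toy shapes (not the repo's training code, which would use the `peft` library) shows the forward pass:

```python
def matvec(M, x):
    """Multiply matrix M (list of rows) by vector x."""
    return [sum(m * v for m, v in zip(row, x)) for row in M]

def lora_forward(W, A, B, x, alpha=16, r=2):
    """y = Wx + (alpha/r) * B(Ax): frozen path plus scaled low-rank adapter path."""
    base = matvec(W, x)              # frozen pretrained weight
    delta = matvec(B, matvec(A, x))  # low-rank update, only A and B are trained
    scale = alpha / r
    return [b + scale * d for b, d in zip(base, delta)]

W = [[1.0, 0.0], [0.0, 1.0]]  # 2x2 frozen weight (toy identity)
A = [[0.1, 0.0], [0.0, 0.1]]  # r x d_in
B = [[0.0, 0.0], [0.0, 0.0]]  # d_out x r, zero-initialized as in LoRA
print(lora_forward(W, A, B, [2.0, 3.0]))  # -> [2.0, 3.0]: zero B leaves Wx unchanged
```

The other tracks vary this recipe: DoRA adds a magnitude/direction decomposition of W, AdaLoRA adapts the rank r during training, and QLoRA quantizes the frozen base weights.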
Quantitative Validation
The pipeline predicts three domain-specific outputs per review: risk, trust, and service, which is more useful than a single sentiment label.
Baseline and PEFT experiment tracks are both included, so the comparison story is built into the repository instead of implied.
The dashboard makes artifact inspection and live inference readable, which turns the project into a usable analysis surface rather than a notebook-only experiment.
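The comparison story above can be made concrete with per-aspect metrics: instead of one accuracy number, each track is scored on risk, trust, and service separately. The data and helper below are illustrative, not the repo's evaluation artifacts.

```python
from collections import defaultdict

ASPECTS = ("risk", "trust", "service")

def per_aspect_accuracy(golds, preds):
    """Accuracy per aspect across paired gold/predicted label dicts."""
    hits, totals = defaultdict(int), defaultdict(int)
    for gold, pred in zip(golds, preds):
        for aspect in ASPECTS:
            totals[aspect] += 1
            hits[aspect] += int(gold[aspect] == pred[aspect])
    return {a: hits[a] / totals[a] for a in ASPECTS}

# Toy labels for two reviews; a real run would load these from eval artifacts.
golds = [{"risk": "neg", "trust": "pos", "service": "neu"},
         {"risk": "pos", "trust": "pos", "service": "neg"}]
peft  = [{"risk": "neg", "trust": "pos", "service": "neu"},
         {"risk": "pos", "trust": "neg", "service": "neg"}]
print(per_aspect_accuracy(golds, peft))  # -> {'risk': 1.0, 'trust': 0.5, 'service': 1.0}
```

A table like this is exactly what the dashboard makes inspectable: a track can win on risk while losing on trust, which a single aggregate score would hide.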
Delivery & Reflections
ABSA framing matters because one review can contain multiple business signals, and collapsing everything into one polarity label hides that structure.
Good public ML repos do not need every checkpoint bundled if preprocessing, evaluation entry points, and summary artifacts are documented clearly.
For this project, the dashboard is part of the technical story because it shows how multi-aspect outputs become inspectable for non-model users.