ADSMOD is a comprehensive web application designed for the collection, management, and modeling of adsorption data. This project represents the evolution and unification of two predecessor projects: ADSORFIT and NISTADS Adsorption Modeling (the former name of this repository).
By merging the capabilities of these systems into a single, cohesive platform, ADSMOD provides a robust workflow for researchers and material scientists. The application allows users to:
- Collect adsorption isotherms from the NIST Adsorption Database.
- Enrich material data with chemical properties fetched from PubChem.
- Build curated, standardized datasets suitable for machine learning.
- Train and evaluate deep learning models to predict adsorption behaviors.
The system is organized as a modern web application with a responsive user interface and a backend focused on data processing and machine learning tasks.
Work in Progress: This project is still under active development. It will be updated regularly, but you may encounter bugs, issues, or incomplete features.
This project utilizes deep learning techniques to model adsorption phenomena.
- Model: The core learning capability is based on the SCADS model architecture.
- Learning: The system relies on Supervised Learning, using historical experimental data to train predictive models.
- Dataset:
- Primary Source: Experimental adsorption isotherms from the NIST Adsorption Database.
- Enrichment: Chemical properties (for example molecular weights and SMILES strings) from PubChem.
- The application handles the fetch, cleanup, and merge steps needed to produce training-ready datasets.
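As a sketch of what the clean-and-merge step produces, the snippet below joins isotherm records with PubChem-style properties using pandas. The column names are illustrative only and do not reflect the actual ADSMOD schema:

```python
import pandas as pd

# Hypothetical column names; the real ADSMOD schema may differ.
isotherms = pd.DataFrame({
    "adsorbate": ["CO2", "CH4"],
    "pressure_bar": [1.0, 1.0],
    "uptake_mmol_g": [2.1, 0.9],
})
properties = pd.DataFrame({
    "adsorbate": ["CO2", "CH4"],
    "molecular_weight": [44.01, 16.04],
    "smiles": ["C(=O)=O", "C"],
})

def build_training_table(isotherms: pd.DataFrame,
                         properties: pd.DataFrame) -> pd.DataFrame:
    """Clean isotherm records and enrich them with chemical properties."""
    clean = isotherms.dropna().drop_duplicates()                  # cleanup step
    return clean.merge(properties, on="adsorbate", how="left")    # enrichment step

training = build_training_table(isotherms, properties)
```

A left merge keeps every isotherm row even when a property lookup failed, which makes missing enrichment data easy to spot downstream.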
ADSMOD provides an automated installation and launcher script for Windows users.
- Navigate to the `ADSMOD` directory.
- Run `start_on_windows.bat`.
What this script does:
- Downloads portable Python, uv, and Node.js runtimes into `runtimes/` (first run only).
- Installs backend dependencies from `pyproject.toml` into `app/server/.venv`.
- Installs frontend dependencies and builds the frontend bundle.
- Starts backend and frontend.
First Run vs. Subsequent Runs:
- On the first run, setup may take time because runtimes and dependencies are downloaded.
- On subsequent runs, launch is faster because the existing setup is reused.
If you prefer manual setup or are running outside the launcher workflow:
- Install Python and Node.js.
- Install backend dependencies from `app/server`.
- Install frontend dependencies in `app/client`.
- Launch the backend and frontend processes.
Windows:
Double-click `start_on_windows.bat`. This launches the backend and frontend and opens the UI at `http://<UI_HOST>:<UI_PORT>`, read from `settings/.env`.
Windows (Packaged Tauri App):
Build with `release\tauri\build_with_tauri.bat`, then launch from `release/windows/installers` or `release/windows/portable`.
Both local web mode and packaged Tauri mode use the same runtime file:
`settings/.env`
Adjust host/port and runtime backend values in that file when needed.
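As an illustration, a minimal `settings/.env` might look like the following; the values shown are placeholders, not defaults shipped with the project:

```
# Example settings/.env (illustrative values only)
FASTAPI_HOST=127.0.0.1
FASTAPI_PORT=8000
UI_HOST=127.0.0.1
UI_PORT=5173
KERAS_BACKEND=torch
MPLBACKEND=Agg
RELOAD=true
OPTIONAL_DEPENDENCIES=false
VITE_API_BASE_URL=/api
```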
The application workflow is organized into three top navigation tabs: Source, Fitting, and Training.
The snapshots below were captured from the current develop build (v2.3.0 release preparation) and are intended to show representative product states without duplication.
- Upload local `.csv` or `.xlsx` adsorption data.
- Collect and enrich adsorption data from NIST-A.
- Monitor ingestion and enrichment progress from the UI.
Source tab: upload local datasets, review sample/size metadata, and run NIST-A collection tools.
- Select a dataset (uploaded or NIST).
- Configure optimizer settings and fitting iterations.
- Select adsorption models and run fitting.
- Review fit status and logs.
Fitting tab: choose adsorption models, configure optimization, and inspect fitting logs.
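ADSMOD's actual fitting backend is not reproduced here; as a minimal illustration of the workflow, the sketch below fits a Langmuir isotherm (a classic adsorption model) to synthetic data with SciPy's `curve_fit`:

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(p, q_max, b):
    """Langmuir isotherm: uptake as a function of pressure p.

    q_max is the saturation capacity, b the affinity constant.
    """
    return q_max * b * p / (1.0 + b * p)

# Synthetic isotherm generated from known parameters
# (q_max = 3.0 mmol/g, b = 2.0 1/bar).
pressure = np.linspace(0.01, 5.0, 50)
uptake = langmuir(pressure, 3.0, 2.0)

# Fit the model; p0 is the optimizer's starting guess.
params, _ = curve_fit(langmuir, pressure, uptake, p0=[1.0, 1.0])
```

On noiseless synthetic data the optimizer recovers the generating parameters; with real isotherms the starting guess and iteration limits (the optimizer settings mentioned above) matter far more.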
- Build machine-learning-ready datasets.
- Configure and start new training experiments.
- Resume previous runs from checkpoints.
- Monitor run status and metrics from the dashboard.
Train Datasets view: pick a processed dataset and launch a training setup.
Checkpoints view: review saved checkpoints and resume previous experiments.
Training Dashboard view: track run progress and monitor key training metrics.
Run `setup_and_maintenance.bat` to access setup and maintenance actions:
- Remove logs: clears `.log` files under `app/resources/logs`.
- Uninstall app: removes local runtimes and build artifacts while preserving folder scaffolding.
- Initialize database: creates or resets the project database schema.
- Clean desktop build artifacts: removes Tauri build output under release targets.
From `app/client`:

```
npm install
npm run dev
npm run build
```

The frontend API base path is controlled by `VITE_API_BASE_URL` (default: `/api`).
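Vite resolves `VITE_API_BASE_URL` itself at build time; purely as an illustration of the lookup-with-default behavior, a minimal `.env`-style reader might look like this (hypothetical helper, not part of the ADSMOD codebase):

```python
def read_env_value(env_text: str, key: str, default: str) -> str:
    """Tiny .env-style lookup (a sketch; real tooling does much more)."""
    for line in env_text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            k, _, v = line.partition("=")
            if k.strip() == key:
                return v.strip()
    return default

# Falls back to the same-origin default when the key is absent.
base = read_env_value("VITE_API_BASE_URL=/api\n", "VITE_API_BASE_URL", "/api")
```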
The application stores data and artifacts in specific directories:
- checkpoints: trained model weights, training history, and model configuration files.
- database: local SQLite database for metadata, cached responses, and experiment indexes.
- logs: application logs for troubleshooting and monitoring.
- runtimes: portable Python/uv/Node.js downloaded by the Windows launcher.
- runtime venv: backend virtual environment at `app/server/.venv`.
- runtime lockfile: backend lockfile at `app/server/uv.lock`.
- templates: starter assets such as the `.env` scaffold.
Runtime/process values are loaded from `settings/.env`.
Database mode and backend defaults are loaded from `settings/configurations.json`.
`.env` runtime keys used by the launcher, tests, frontend, and Tauri startup:
| Variable | Description |
|---|---|
| `FASTAPI_HOST`, `FASTAPI_PORT` | Backend bind host and port. |
| `UI_HOST`, `UI_PORT` | Frontend host and port for local web mode and tests. |
| `KERAS_BACKEND`, `MPLBACKEND` | ML/scientific runtime backend configuration. |
| `RELOAD` | Uvicorn reload toggle for local development. |
| `OPTIONAL_DEPENDENCIES` | Enables optional test dependencies in the launcher flow. |
| `VITE_API_BASE_URL` | Frontend API base path (same-origin `/api` by default). |
Single canonical runtime file:
`settings/.env`
This project is licensed under the MIT License. See LICENSE for full terms.
After completing the restructure and all rewiring, run the full validation process in one uninterrupted pass until everything is green.
Do not stop to ask for confirmation between steps. If a check fails, diagnose the issue, fix it, and rerun the relevant checks until the project passes. Only stop if there is a hard blocker that cannot be resolved from the repository contents, and in that case report the exact blocker, the failing command, the error output, and the files likely involved.
Validation must include, where applicable:
- Run and verify all root-level `.bat` launcher scripts.
- Run and verify all root-level `.bat` maintenance scripts.
- Confirm every `.bat` script still works from the repository root after the restructure.
- Confirm runtime discovery and setup works with `runtimes/`.
- Run `uv sync` from `app/server/`.
- Confirm `.venv` and `uv.lock` are created or used only inside `app/server/`.
- Run backend linting, formatting checks, type checks, and tests if present.
- Run the FastAPI backend startup path and confirm imports, settings loading, resources loading, and path constants work.
- Generate or validate `app/shared/openapi.json` from the backend.
- Run frontend install, linting, type checks, tests, and production build if present.
- Confirm frontend API generation uses `app/shared/openapi.json` where applicable.
- Run all test suites under `app/tests/`.
- Validate Python imports after the move.
- Validate frontend imports after the move.
- Validate all script references under `app/scripts/`.
- Validate documentation and README references to the new layout.
- Validate Docker builds or compose files if present.
- Validate Tauri configuration if present.
- Run Tauri development checks if present.
- Run Tauri packaging or release build if present, and confirm output still lands in `release/` or the configured release output location.
- Confirm `.github/` remains unchanged.
- Confirm `assets/` and `release/` contents remain unchanged.
- Confirm `settings/` and `app/resources/` were extracted correctly and are read from their new locations.
- Confirm there are no stale references to the old root-level backend, client, tests, settings, or resources paths.
At the end, provide a concise validation report listing:
- Every command run.
- Whether it passed or failed.
- Any fixes made after failures.
- Any remaining warnings or limitations.
- Final green status once all required checks pass.
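A validation pass of this shape can be scripted; the sketch below runs a list of commands and records pass/fail for the final report (the command list shown is a placeholder, not the real check set):

```python
import subprocess
import sys

def run_checks(commands: list[list[str]]) -> list[tuple[str, bool]]:
    """Run each command and record whether it exited cleanly."""
    report = []
    for cmd in commands:
        result = subprocess.run(cmd, capture_output=True)
        report.append((" ".join(cmd), result.returncode == 0))
    return report

# Placeholder commands; the real list would cover the checks above
# (launcher scripts, uv sync, backend/frontend tests, and so on).
checks = [
    [sys.executable, "-c", "print('backend import check')"],
]
report = run_checks(checks)
```

Collecting every command with its exit status gives exactly the evidence the report above asks for: what ran, what passed, and what failed.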




