feat: implement 800-character document chunking and stabilize Japanese search build
- Implement 800-char text chunking in add_item_text.
- Work around Lindera/BLAS native-dependency build failures on Windows by substituting a pure-Rust dummy tokenizer.
- Add unit tests for chunking logic in mcp.rs.
- Update .gitignore to include journals.
- Add implementation journal 20260219-0004.
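As a rough illustration of the chunking described above, a minimal sketch of 800-character splitting in Rust follows. The function name `chunk_text` and its signature are assumptions for illustration, not the actual `add_item_text` implementation; it splits on Unicode scalar values (`char`s) rather than bytes, which matters for Japanese text where one character spans multiple UTF-8 bytes.

```rust
/// Split `text` into chunks of at most `max_chars` characters.
/// Hypothetical sketch; char-based (not byte-based) so multi-byte
/// Japanese characters are never cut in half.
fn chunk_text(text: &str, max_chars: usize) -> Vec<String> {
    text.chars()
        .collect::<Vec<char>>()
        .chunks(max_chars)          // fixed-size windows of chars
        .map(|c| c.iter().collect()) // rebuild each window as a String
        .collect()
}

fn main() {
    // 1000 Japanese characters -> one 800-char chunk plus a 200-char remainder.
    let text = "あ".repeat(1000);
    let chunks = chunk_text(&text, 800);
    assert_eq!(chunks.len(), 2);
    assert_eq!(chunks[0].chars().count(), 800);
    assert_eq!(chunks[1].chars().count(), 200);
}
```

A unit test along these lines would match the commit's note about chunking tests in `mcp.rs`.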
Parent: 6f920d9 · Commit: 0cec53db096e335744dc0242b917205c72783838
@楽曲作りまくりおじさん authored 5 days ago
Showing 16 changed files:

- .gitignore
- RELEASE_v0.2.5.md (new file, mode 100644)
- journals/20260218-0001-ジャーナル整理.md (new file, mode 100644)
- journals/20260218-0002-仕様書リフレッシュ.md (new file, mode 100644)
- journals/20260218-0003-仕様書解説充実化.md (new file, mode 100644)
- journals/20260219-0001-情報収集用フォルダ作成.md (new file, mode 100644)
- journals/20260219-0002-gemini-rag移植.md (new file, mode 100644)
- journals/20260219-0003-日本語LSA検索実装.md (new file, mode 100644)
- journals/20260219-0004-チャンク分割実装.md (new file, mode 100644)
- journals/メモ.md (new file, mode 100644)
- package-lock.json
- src-tauri/Cargo.lock
- src-tauri/Cargo.toml
- src-tauri/src/mcp.rs
- src-tauri/src/utils/lsa.rs
- src-tauri/src/utils/tokenizer.rs
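The "pure Rust dummy tokenizer" that replaces Lindera (and its native BLAS-adjacent build trouble on Windows) is not shown in this listing, but one common dependency-free stand-in is character bigrams, which work reasonably for Japanese retrieval. The sketch below is an assumption about what `tokenizer.rs` might do, not its actual contents; `tokenize` and its behavior are hypothetical.

```rust
/// Hypothetical dummy tokenizer: emits character bigrams instead of
/// running Lindera morphological analysis. Pure Rust, no native deps.
fn tokenize(text: &str) -> Vec<String> {
    let chars: Vec<char> = text.chars().filter(|c| !c.is_whitespace()).collect();
    if chars.len() < 2 {
        // Too short for a bigram: fall back to single-character tokens.
        return chars.iter().map(|c| c.to_string()).collect();
    }
    chars
        .windows(2)                  // overlapping pairs of characters
        .map(|w| w.iter().collect()) // each pair becomes one token
        .collect()
}

fn main() {
    // "日本語検索" (5 chars) yields 4 overlapping bigram tokens.
    assert_eq!(tokenize("日本語検索"), vec!["日本", "本語", "語検", "検索"]);
}
```

Bigrams trade linguistic accuracy for build stability: recall stays usable for LSA-style search while the Windows toolchain no longer has to compile Lindera's dictionary and its native dependencies.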