feat: implement 800-character document chunking and stabilize Japanese search build
- Implement 800-char text chunking in add_item_text.
- Fix Lindera/BLAS dependency issues on Windows by switching to a pure-Rust dummy tokenizer.
- Add unit tests for chunking logic in mcp.rs.
- Update .gitignore to include journals.
- Add implementation journal 20260219-0004.
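The 800-character chunking mentioned above must split on character boundaries, not bytes, or Japanese text would be cut mid-codepoint. A minimal sketch of that idea (the function name `chunk_text` is assumed for illustration; the actual `add_item_text` in `mcp.rs` may differ):

```rust
/// Split `text` into chunks of at most `max_chars` characters.
/// Operates on `char`s rather than bytes so multi-byte Japanese
/// codepoints are never split.
fn chunk_text(text: &str, max_chars: usize) -> Vec<String> {
    let chars: Vec<char> = text.chars().collect();
    chars
        .chunks(max_chars)
        .map(|c| c.iter().collect())
        .collect()
}

fn main() {
    // 1700 characters -> two full 800-char chunks plus a 100-char tail.
    let text = "あ".repeat(1700);
    let chunks = chunk_text(&text, 800);
    assert_eq!(chunks.len(), 3);
    assert_eq!(chunks[0].chars().count(), 800);
    assert_eq!(chunks[2].chars().count(), 100);
    println!("{} chunks", chunks.len());
}
```

A fixed-size character window like this is the simplest chunking policy; the unit tests mentioned for `mcp.rs` would cover exactly these boundary cases (exact multiples, short inputs, multi-byte text).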
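A "pure Rust dummy tokenizer" replacing Lindera typically means a dictionary-free fallback such as character bigrams, which need no native BLAS or dictionary dependencies and work tolerably for Japanese search. A hedged sketch of that approach (the name `dummy_tokenize` is hypothetical; the real `tokenizer.rs` may tokenize differently):

```rust
/// Dictionary-free stand-in tokenizer: emits character bigrams,
/// a common pure-Rust fallback for Japanese text when a
/// morphological analyzer is unavailable.
fn dummy_tokenize(text: &str) -> Vec<String> {
    let chars: Vec<char> = text.chars().filter(|c| !c.is_whitespace()).collect();
    if chars.len() < 2 {
        // Too short for a bigram: fall back to single characters.
        return chars.iter().map(|c| c.to_string()).collect();
    }
    chars.windows(2).map(|w| w.iter().collect()).collect()
}

fn main() {
    assert_eq!(dummy_tokenize("日本語"), vec!["日本", "本語"]);
    println!("{:?}", dummy_tokenize("日本語検索"));
}
```

Bigram tokens feed the LSA index in `lsa.rs` the same way word tokens would, trading segmentation quality for a dependency-free Windows build.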
1 parent 9c250cd commit 8c476504979b98e1c2233cb8c3535cac76acc11d
@楽曲作りまくりおじさん authored 6 days ago
Showing 16 changed files
.gitignore
RELEASE_v0.2.5.md 0 → 100644
journals/20260218-0001-ジャーナル整理.md 0 → 100644
journals/20260218-0002-仕様書リフレッシュ.md 0 → 100644
journals/20260218-0003-仕様書解説充実化.md 0 → 100644
journals/20260219-0001-情報収集用フォルダ作成.md 0 → 100644
journals/20260219-0002-gemini-rag移植.md 0 → 100644
journals/20260219-0003-日本語LSA検索実装.md 0 → 100644
journals/20260219-0004-チャンク分割実装.md 0 → 100644
journals/メモ.md 0 → 100644
package-lock.json
src-tauri/Cargo.lock
src-tauri/Cargo.toml
src-tauri/src/mcp.rs
src-tauri/src/utils/lsa.rs
src-tauri/src/utils/tokenizer.rs