llama.cpp/scripts
KASR b0c71c7b6d
scripts : platform independent script to verify sha256 checksums (#1203)
* Python script to verify the checksums of the LLaMA models

Added a Python script for verifying the SHA256 checksums of files in a directory, which runs on multiple platforms. Improved the formatting of the output for better readability.
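
A minimal sketch of what such a verification loop can look like. The checksum-list format (one `<sha256>  <relative path>` entry per line), the function names, and the output layout are assumptions for illustration only, not the exact behaviour of verify-checksum-models.py; the naive whole-file read used here is refined into chunked reading further down.

```python
import hashlib
import os
import sys

def verify(checksum_list, model_dir):
    # Assumed list format: "<sha256>  <relative path>" per line.
    all_ok = True
    with open(checksum_list) as f:
        for line in f:
            expected, _, rel_path = line.strip().partition("  ")
            path = os.path.join(model_dir, rel_path)
            if not os.path.isfile(path):
                print(f"{'MISSING':8} {rel_path}")
                all_ok = False
                continue
            # Naive whole-file read; see the chunked variant below.
            with open(path, "rb") as fp:
                actual = hashlib.sha256(fp.read()).hexdigest()
            status = "OK" if actual == expected else "FAILED"
            all_ok = all_ok and status == "OK"
            print(f"{status:8} {rel_path}")
    return all_ok

if __name__ == "__main__":
    sys.exit(0 if verify(sys.argv[1], sys.argv[2]) else 1)
```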

* Update README.md

update the README for improved readability and to explain the usage of the Python checksum verification script

* update the verification script

I've extended the script based on suggestions by @prusnak

The script now checks the available RAM; if there is enough to read the file at once, it will do so. If not, the file is read in chunks.

* minor improvement

small change so that the available RAM is checked and not the total RAM

* remove the part of the code that reads the file at once if enough RAM is available

Based on suggestions from @prusnak, I removed the part of the code that checks whether the user has enough RAM to read the entire model at once. The file is now always read in chunks.
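
A chunked hashing helper along those lines, again only a sketch; the 1 MiB chunk size is an arbitrary assumption, not necessarily the value used by the script.

```python
import hashlib

def sha256_of_file(path, chunk_size=1024 * 1024):
    # Hash the file in fixed-size chunks so that multi-gigabyte model
    # files never have to be loaded into memory at once.
    digest = hashlib.sha256()
    with open(path, "rb") as fp:
        for chunk in iter(lambda: fp.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()
```

In the earlier verification loop, the whole-file read can simply be replaced by a call to this helper.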

* Update verify-checksum-models.py

quick fix to pass the git check
2023-05-03 18:31:28 +03:00
build-info.cmake fix build-info.h for git submodules (#1289) 2023-05-03 02:43:43 +02:00
build-info.h.in fix build-info.h for git submodules (#1289) 2023-05-03 02:43:43 +02:00
build-info.sh Add git-based build information for better issue tracking (#1232) 2023-05-01 18:23:47 +02:00
sync-ggml.sh scripts : add helper scripts to synch ggml repo 2023-04-23 19:57:09 +03:00
verify-checksum-models.py scripts : platform independent script to verify sha256 checksums (#1203) 2023-05-03 18:31:28 +03:00