Revisions of ollama

buildservice-autocommit accepted request 1174685 from Loren Burkholder (LorenDB) (revision 16)
baserev update by copy to link target
Loren Burkholder (LorenDB) accepted request 1174682 from Eyad Issa (VaiTon) (revision 15)
- Update to version 0.1.38:
  * New model: Falcon 2: A new 11B parameters causal decoder-only
    model built by TII and trained over 5T tokens.
  * New model: Yi 1.5: A new high-performing version of Yi, now 
    licensed as Apache 2.0. Available in 6B, 9B and 34B sizes.
  * Added ollama ps command
  * Added /clear command
  * Fixed issue where switching loaded models on Windows would take
    several seconds
  * Running /save will no longer abort the chat session if an
    incorrect name is provided
  * The /api/tags API endpoint will now correctly return an empty
    list [] instead of null if no models are provided
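The `/api/tags` fix above removes a sharp edge for API clients: before 0.1.38 an empty server could answer with `null` where a list was expected. A minimal client-side sketch of the workaround (the `models_from_tags` helper is illustrative, not part of ollama):

```python
import json

def models_from_tags(body: str) -> list:
    """Parse an /api/tags response body, tolerating the pre-0.1.38
    behavior where "models" could be null instead of an empty list."""
    data = json.loads(body)
    # On servers older than 0.1.38, "models" is null when no models exist;
    # `or []` normalizes both null and [] to an empty list.
    return data.get("models") or []

# Pre-fix server with no models:
assert models_from_tags('{"models": null}') == []
# Post-fix server with no models:
assert models_from_tags('{"models": []}') == []
# Populated server:
assert models_from_tags('{"models": [{"name": "llama2"}]}') == [{"name": "llama2"}]
```

With 0.1.38 and later the guard is unnecessary, but it keeps a client compatible with both behaviors.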
buildservice-autocommit accepted request 1173543 from Loren Burkholder (LorenDB) (revision 14)
baserev update by copy to link target
Loren Burkholder (LorenDB) accepted request 1173521 from Eyad Issa (VaiTon) (revision 13)
- Update to version 0.1.37:
  * Fixed issue where models with uppercase characters in the name
    would not show with ollama list
  * Fixed usage string for ollama create
  * Fixed finish_reason being "" instead of null in the
    OpenAI-compatible chat API
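The finish_reason fix matters to streaming clients: in the OpenAI-style API, an in-progress chunk should carry `finish_reason: null`, not an empty string. A hedged sketch of the client-side normalization that was needed before this fix (the helper name is illustrative):

```python
import json

def finish_reason(chunk: str):
    """Return the finish_reason from an OpenAI-compatible chat chunk,
    normalizing the pre-0.1.37 empty string "" to None (JSON null)."""
    choice = json.loads(chunk)["choices"][0]
    reason = choice.get("finish_reason")
    # Older servers emitted "" for still-streaming chunks; treat it as null.
    return reason if reason != "" else None

assert finish_reason('{"choices": [{"finish_reason": ""}]}') is None
assert finish_reason('{"choices": [{"finish_reason": "stop"}]}') == "stop"
```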

- Use obs_scm service instead of the deprecated tar_scm
- Use zstd for vendor tarball compression
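The packaging change above swaps the deprecated `tar_scm` service for `obs_scm`. A minimal `_service` sketch of such a setup; the exact parameter names and the zstd (`zst`) compression value are assumptions about this package's configuration, not copied from it:

```xml
<services>
  <!-- Fetch sources with obs_scm instead of the deprecated tar_scm -->
  <service name="obs_scm">
    <param name="url">https://github.com/ollama/ollama.git</param>
    <param name="scm">git</param>
    <param name="revision">v0.1.37</param>
  </service>
  <!-- Pack and recompress at build time; "zst" selects zstd -->
  <service name="tar" mode="buildtime"/>
  <service name="recompress" mode="buildtime">
    <param name="file">*.tar</param>
    <param name="compression">zst</param>
  </service>
</services>
```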
buildservice-autocommit accepted request 1173462 from Loren Burkholder (LorenDB) (revision 12)
baserev update by copy to link target
Loren Burkholder (LorenDB) accepted request 1173461 from Eyad Issa (VaiTon) (revision 11)
- Update to version 0.1.36:
- Update to version 0.1.35:
- Update to version 0.1.34:
buildservice-autocommit accepted request 1169871 from Loren Burkholder (LorenDB) (revision 10)
baserev update by copy to link target
Loren Burkholder (LorenDB) accepted request 1169791 from Richard Rahl (rrahl0) (revision 9)
- Update to version 0.1.32:
  * scale graph based on gpu count
  * Support unicode characters in model path (#3681)
  * darwin: no partial offloading if required memory greater than system
  * update llama.cpp submodule to `7593639` (#3665)
  * fix padding in decode
  * Revert "cmd: provide feedback if OLLAMA_MODELS is set on non-serve command (#3470)" (#3662)
  * Added Solar example at README.md (#3610)
  * Update langchainjs.md (#2030)
  * Added MindsDB information (#3595)
  * examples: add more Go examples using the API (#3599)
  * Update modelfile.md
  * Add llama2 / torch models for `ollama create` (#3607)
  * Terminate subprocess if receiving `SIGINT` or `SIGTERM` signals while model is loading (#3653)
  * app: gracefully shut down `ollama serve` on windows (#3641)
  * types/model: add path helpers (#3619)
  * update llama.cpp submodule to `4bd0f93` (#3627)
  * types/model: make ParseName variants less confusing (#3617)
  * types/model: remove (*Digest).Scan and Digest.Value (#3605)
  * Fix rocm deps with new subprocess paths
  * mixtral mem
  * Revert "types/model: remove (*Digest).Scan and Digest.Value (#3589)"
  * types/model: remove (*Digest).Scan and Digest.Value (#3589)
  * types/model: remove DisplayLong (#3587)
  * types/model: remove MarshalText/UnmarshalText from Digest (#3586)
  * types/model: init with Name and Digest types (#3541)
  * server: provide helpful workaround hint when stalling on pull (#3584)
  * partial offloading
  * refactor tensor query
  * api: start adding documentation to package api (#2878)
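Among the 0.1.32 changes, the subprocess-termination entry (terminate on `SIGINT`/`SIGTERM` while a model is loading) follows a common pattern: install signal handlers that forward the shutdown to the child. A minimal Python sketch of that pattern under stated assumptions (the function is illustrative, not ollama's Go implementation):

```python
import signal
import subprocess
import sys

def run_with_cleanup(cmd):
    """Start a subprocess and terminate it if SIGINT or SIGTERM arrives
    while it is still running (e.g. while a model is loading)."""
    proc = subprocess.Popen(cmd)

    def handler(signum, frame):
        proc.terminate()      # forward the shutdown request to the child
        proc.wait()           # reap it before exiting ourselves
        sys.exit(128 + signum)

    for sig in (signal.SIGINT, signal.SIGTERM):
        signal.signal(sig, handler)
    return proc.wait()

# Normal path: the child exits on its own and its return code is passed through.
assert run_with_cleanup([sys.executable, "-c", "pass"]) == 0
```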
buildservice-autocommit accepted request 1168439 from Loren Burkholder (LorenDB) (revision 8)
baserev update by copy to link target
Loren Burkholder (LorenDB) accepted request 1168020 from Bernhard Wiedemann (bmwiedemann) (revision 7)
- Update to version 0.1.31:
  * Backport MacOS SDK fix from main
  * Apply 01-cache.diff
  * fix: workflows
  * stub stub
  * mangle arch
  * only generate on changes to llm subdirectory
  * only generate cuda/rocm when changes to llm detected
  * Detect arrow keys on windows (#3363)
  * add license in file header for vendored llama.cpp code (#3351)
  * remove need for `$VSINSTALLDIR` since build will fail if `ninja` cannot be found (#3350)
  * change `github.com/jmorganca/ollama` to `github.com/ollama/ollama` (#3347)
  * malformed markdown link (#3358)
  * Switch runner for final release job
  * Use Rocky Linux Vault to get GCC 10.2 installed
  * Revert "Switch arm cuda base image to centos 7"
  * Switch arm cuda base image to centos 7
  * Bump llama.cpp to b2527
  * Fix ROCm link in `development.md`
  * adds ooo to community integrations (#1623)
  * Add cliobot to ollama supported list (#1873)
  * Add Dify.AI to community integrations (#1944)
  * enh: add ollero.nvim to community applications (#1905)
  * Add typechat-cli to Terminal apps (#2428)
  * add new Web & Desktop link in readme for alpaca webui (#2881)
  * Add LibreChat to Web & Desktop Apps (#2918)
  * Add Community Integration: OllamaGUI (#2927)
  * Add Community Integration: OpenAOE (#2946)
  * Add Saddle (#3178)
  * tlm added to README.md terminal section. (#3274)
...
Loren Burkholder (LorenDB) committed (revision 6)
- Update to version 0.1.28:
  * Fix embeddings load model behavior (#2848)
  * Add Community Integration: NextChat (#2780)
  * prepend image tags (#2789)
  * fix: print usedMemory size right (#2827)
  * bump submodule to `87c91c07663b707e831c59ec373b5e665ff9d64a` (#2828)
  * Add ollama user to video group
  * Add env var so podman will map cuda GPUs
  * Omit build date from gzip headers
  * Log unexpected server errors checking for update
  * Refine container image build script
  * Bump llama.cpp to b2276
  * Determine max VRAM on macOS using `recommendedMaxWorkingSetSize` (#2354)
  * Update types.go (#2744)
  * Update langchain python tutorial (#2737)
  * no extra disk space for windows installation (#2739)
  * clean up go.mod
  * remove format/openssh.go
  * Add Community Integration: Chatbox
  * better directory cleanup in `ollama.iss`
  * restore windows build flags and compression
Ana Guerrero (anag+factory) accepted request 1152310 from Loren Burkholder (LorenDB) (revision 5)
initialized devel package after accepting 1152310
Loren Burkholder (LorenDB) accepted request 1152042 from Jan Engelhardt (jengelh) (revision 4)
factory review.

- Edit description, answer _what_ the package is and use nominal
  phrase. (https://en.opensuse.org/openSUSE:Package_description_guidelines)
Loren Burkholder (LorenDB) committed (revision 3)
Remove the shadow dependency as it is not needed
Loren Burkholder (LorenDB) committed (revision 2)
Apply some suggested changes to the user configuration
Guillaume GARDET (Guillaume_G) accepted request 1150495 from Loren Burkholder (LorenDB) (revision 1)
I've created a package for Ollama (https://ollama.com) so that users don't have to use an install script. Note that this build does not have CUDA or ROCm support enabled; we won't be able to package CUDA for obvious reasons, and ROCm is currently not packaged in Factory. However, for basic CPU-only use, this is better than curling a random script from the interwebs :)