Unverified commit 164442c0 authored by writinwaters, committed by GitHub

0329 readme (#175)

* Updated Get Started.

* Finishing off Get Started

* minor

* minor
- The part of the document cited in the answer is presented in the most expressive way.
- For PDF files, the cited parts can be located in the original PDF.
## 🤺RagFlow vs. other RAG applications
| Feature | RagFlow | Langchain-Chatchat | Assistants API | QAnything | LangChain |
|---------|:---------:|:-----------:|:-----------:|:-----------:|:-----------:|
| **Well-Founded Answer** | :white_check_mark: | :x: | :x: | :x: | :x: |
| **Trackable Chunking** | :white_check_mark: | :x: | :x: | :x: | :x: |
| **Chunking Method** | Rich Variety | Naive | Naive | Naive | Naive |
| **Table Structure Recognition** | :white_check_mark: | :x: | :x: | :x: | :x: |
| **Structured Data Lookup** | :white_check_mark: | :x: | :x: | :x: | :x: |
| **Programming Approach** | API-oriented | API-oriented | API-oriented | API-oriented | Python Code-oriented |
| **RAG Engine** | :white_check_mark: | :white_check_mark: | :white_check_mark: | :x: | :x: |
| **Prompt IDE** | :white_check_mark: | :white_check_mark: | :white_check_mark: | :x: | :x: |
| **Supported LLMs** | Rich Variety | Rich Variety | OpenAI-only | QwenLLM | Rich Variety |
| **Local Deployment** | :white_check_mark: | :white_check_mark: | :x: | :x: | :x: |
| **Ecosystem Strategy** | Open Source | Open Source | Closed Source | Open Source | Open Source |
## 🔎 System Architecture
<div align="center" style="margin-top:20px;margin-bottom:20px;">
> $ sysctl vm.max_map_count
> ```
>
> Reset `vm.max_map_count` to a value greater than 65535 if it is not.
>
> ```bash
> # In this case, we set it to 262144:
> $ sudo sysctl -w vm.max_map_count=262144
> ```
>
> This change will be reset after a system reboot. To ensure your change remains permanent, add or update the `vm.max_map_count` value in **/etc/sysctl.conf** accordingly:
>
> ```bash
> vm.max_map_count=262144
> ```
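The check above can be sketched as a tiny shell helper; `needs_bump` is hypothetical and not part of RagFlow, it only encodes the rule that values at or below 65535 must be raised:

```shell
# Hypothetical helper (not part of RagFlow): decide whether a given
# vm.max_map_count value must be raised. The requirement is > 65535;
# this guide uses 262144.
needs_bump() {
  [ "$1" -le 65535 ]
}

needs_bump 65530  && echo "65530 needs raising"   # prints: 65530 needs raising
needs_bump 262144 || echo "262144 is fine"        # prints: 262144 is fine
```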
### Start up the RagFlow server
1. Clone the repo:

   ```bash
   $ git clone https://github.com/infiniflow/ragflow.git
   ```
2. **Recommended**: In **docker/service_conf.yaml**, select the desired LLM factory in `user_default_llm` and update the `API_KEY` field with your own.

   > - You can still continue with the default settings, but it is highly recommended that you use your own API key the next time you log into the system.
   > - RagFlow currently supports the following LLM factories, with more in the pipeline: [OpenAI](https://platform.openai.com/login?launch), [Tongyi-Qianwen](https://dashscope.console.aliyun.com/model), [ZHIPU-AI](https://open.bigmodel.cn/), and [Moonshot](https://platform.moonshot.cn/docs/docs).
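   As an illustrative sketch only, the relevant section of **docker/service_conf.yaml** might look like the fragment below; check your own copy of the file, since the exact keys and accepted factory names may differ:

   ```yaml
   # Illustrative fragment; verify against your docker/service_conf.yaml.
   user_default_llm:
     factory: "OpenAI"     # one of the supported LLM factories
     api_key: "sk-..."     # replace with your own API key
   ```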
3. You are now presented with two options for building the system: using the pre-built images, or building the images from source.

   ```bash
   # To use the pre-built images:
   $ cd ragflow/docker
   $ docker compose up -d
   ```

   ```bash
   # To build the images from source:
   $ cd ragflow/
   $ docker build -t infiniflow/ragflow:v1.0 .
   $ cd ragflow/docker
   $ docker compose up -d
   ```

   > The core image is about 15 GB in size and may take a while to load.
4. Check the server status after pulling all images and having Docker up and running:

   ```bash
   $ docker logs -f ragflow-server
   ```

   *The following output confirms a successful launch of the system:*

   ```bash
       ____                 ______ __
    * Running on http://127.0.0.1:9380
    * Running on http://172.22.0.5:9380
    INFO:werkzeug:Press CTRL+C to quit
   ```
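   As a sketch only, you could also poll the API port shown in the log output (9380) instead of watching the log by eye; `wait_for_http` is a hypothetical helper, not part of RagFlow:

   ```shell
   # Hypothetical helper: poll an HTTP endpoint until it answers or the
   # retry budget runs out. RagFlow listens on port 9380 per the log above.
   wait_for_http() {   # usage: wait_for_http <url> <attempts>
     url=$1; attempts=$2; i=0
     while [ "$i" -lt "$attempts" ]; do
       if curl -s -o /dev/null "$url"; then
         echo "up: $url"; return 0
       fi
       i=$((i + 1)); sleep 1
     done
     echo "timed out: $url"; return 1
   }

   # Example: wait up to ~30 s for the RagFlow API port.
   # wait_for_http http://127.0.0.1:9380 30
   ```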
5. In your browser, enter the IP address of your server and try RagFlow out.
## 🔧 Configurations