
Function Execution Process

This guide describes the task flow that takes place when a function is called.

1. Basic Process

In general, when a function is called in DataFlux Func, the execution flow is as follows:

  1. The user initiates an HTTP request to the Server
  2. The Server generates a function execution task and pushes it to the Redis queue
  3. The Worker pops the function execution task from the Redis queue and executes it
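For example, a function exposed through an Auth Link can be invoked with a plain HTTP request. Below is a minimal sketch using Python's requests library; the host, port, Auth Link ID, and arguments are placeholders, and the exact URL and request body format should be taken from the function's page in your own DataFlux Func UI:

import requests

# Placeholder Auth Link endpoint; copy the real URL from the
# function's page in your DataFlux Func UI.
url = 'http://my-func-host:8088/api/v1/al/auln-xxxxx'

# Function arguments are passed under "kwargs" in the JSON body
# (assumed request format; check the API docs for your version).
resp = requests.post(url, json={'kwargs': {'x': 1, 'y': 2}})
print(resp.json())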

For Cron Jobs, there is no "user initiates an HTTP request" step; the task is generated directly by the Beat service. The execution flow is as follows:

  1. The Beat periodically generates a function execution task and pushes it to the Redis queue
  2. The Worker pops the function execution task from the Redis queue and executes it
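Whichever way the task is enqueued, the Worker ends up executing an ordinary DataFlux Func script function. A minimal sketch of such a function (the DFF object is injected by the DataFlux Func runtime inside a Script, so no import is needed; the function name and logic here are made up):

@DFF.API('Sync Data')
def sync_data(limit=100):
    # Business logic executed by the Worker, regardless of whether the
    # task was enqueued by the Server (HTTP) or by Beat (Cron Job).
    print('syncing up to %s records' % limit)
    return {'synced': limit}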

2. Single Machine Deployment

For a single-machine deployment of DataFlux Func, the entire process is very simple:

flowchart TB
    USER[User]
    SERVER[Server]
    WORKER[Worker]
    REDIS_QUEUE[Redis Queue]
    BEAT[Beat]

    USER --HTTP Request--> SERVER
    SERVER --Function Execution Task Enqueue--> REDIS_QUEUE
    REDIS_QUEUE --Function Execution Task Dequeue--> WORKER

    BEAT --"Function Execution Task Enqueue\n(Periodic)"--> REDIS_QUEUE

3. Multi-Replica Deployment

For a multi-replica deployment of DataFlux Func, requests pass through an SLB (or another reverse proxy service), so any Server replica might receive a given request.

At the same time, since all replicas connect to the same Redis, each task is fetched and executed by exactly one Worker:

flowchart TB
    USER[User]
    SLB[SLB]
    SERVER_1[Server 1]
    SERVER_2[Server 2]
    WORKER_1[Worker 1]
    WORKER_2[Worker 2]
    REDIS_QUEUE[Redis Queue]
    BEAT[Beat]

    USER --HTTP Request--> SLB
    SLB --HTTP Forward--> SERVER_1
    SLB -.-> SERVER_2
    SERVER_1 --Function Execution Task Enqueue--> REDIS_QUEUE
    SERVER_2 -.-> REDIS_QUEUE
    REDIS_QUEUE -.-> WORKER_1
    REDIS_QUEUE --Function Execution Task Dequeue--> WORKER_2

    BEAT --"Function Execution Task Enqueue\n(Periodic)"--> REDIS_QUEUE

4. Fully Independent Primary-Backup Deployment

In some cases a "fully independent primary-backup deployment" is needed. Redis can then be split per node as well, and the weight ratio of the primary and backup nodes set to 100:0 in the SLB (or other reverse proxy server).

In this setup the primary and backup nodes are completely independent; each is a fully self-contained, running DataFlux Func. If the DataFlux Func on both nodes runs at the same time, every Cron Job fires on both nodes and executes twice. You can therefore either keep the backup node's DataFlux Func shut down, or add your own guard logic in the Script to avoid duplicate execution, as sketched below.
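If the backup node has to stay running, one simple guard is to gate the function body on a node-role marker. A minimal sketch (NODE_ROLE is an environment variable you would have to set yourself on each node; it is not a built-in DataFlux Func setting, and DFF is again injected by the runtime):

import os

@DFF.API('Guarded Cron Task')
def guarded_cron_task():
    # Both nodes' Beat services enqueue this task, but only the node
    # explicitly marked as primary actually runs the task body.
    if os.environ.get('NODE_ROLE') != 'primary':
        return 'skipped on non-primary node'

    # ... actual task logic ...
    return 'done'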

flowchart TB
    USER[User]
    SLB[SLB]
    MAIN_NODE_SERVER[Primary Node Server]
    MAIN_NODE_WORKER[Primary Node Worker]
    MAIN_NODE_BEAT[Primary Node Beat]
    MAIN_NODE_REDIS_QUEUE[Primary Node Redis Queue]
    BACKUP_NODE_SERVER[Backup Node Server]
    BACKUP_NODE_WORKER[Backup Node Worker]
    BACKUP_NODE_BEAT[Backup Node Beat]
    BACKUP_NODE_REDIS_QUEUE[Backup Node Redis Queue]

    USER --HTTP Request--> SLB
    SLB --HTTP Forward--> MAIN_NODE_SERVER
    SLB -.-> BACKUP_NODE_SERVER

    subgraph BACKUP_NODE ["Backup Node"]
        direction TB
        BACKUP_NODE_SERVER --Function Execution Task Enqueue--> BACKUP_NODE_REDIS_QUEUE
        BACKUP_NODE_REDIS_QUEUE --Function Execution Task Dequeue--> BACKUP_NODE_WORKER

        BACKUP_NODE_BEAT --"Function Execution Task Enqueue\n(Periodic)"--> BACKUP_NODE_REDIS_QUEUE
    end

    subgraph MAIN_NODE ["Primary Node"]
        direction TB
        MAIN_NODE_SERVER --Function Execution Task Enqueue--> MAIN_NODE_REDIS_QUEUE
        MAIN_NODE_REDIS_QUEUE --Function Execution Task Dequeue--> MAIN_NODE_WORKER

        MAIN_NODE_BEAT --"Function Execution Task Enqueue\n(Periodic)"--> MAIN_NODE_REDIS_QUEUE
    end