
Script Development / Function Execution Process

This article describes the task flow that runs when a function is invoked.

1. Basic Process

In general, when invoking a function in DataFlux Func, the specific execution flow is as follows:

  1. The user initiates an HTTP request that reaches the Server
  2. The Server generates a function execution task and pushes it to the Redis queue
  3. The Worker pops the function execution task from the Redis queue and executes it
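The three steps above can be sketched in Python. This is a hypothetical, in-process model for illustration only: the `task_queue` here is a `queue.Queue`, whereas DataFlux Func itself uses a Redis list shared between the Server and Worker processes, and the function names and task fields are made up.

```python
from queue import Queue

# Stand-in for the Redis queue; the real system enqueues across processes.
task_queue = Queue()

def server_handle_request(func_id, kwargs):
    """Step 2: the Server turns the HTTP request into a task and enqueues it."""
    task_queue.put({"funcId": func_id, "kwargs": kwargs})

def worker_run_once(registry):
    """Step 3: a Worker pops one task and executes the target function."""
    task = task_queue.get()
    func = registry[task["funcId"]]
    return func(**task["kwargs"])

# A toy function registry standing in for user scripts
registry = {"demo.add": lambda a, b: a + b}
server_handle_request("demo.add", {"a": 1, "b": 2})  # steps 1 + 2
result = worker_run_once(registry)                    # step 3
print(result)  # 3
```

The key point of the pattern is that the Server never executes the function itself; it only serializes the call into a task, and any Worker can pick it up later.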

For scheduled tasks there is no "user initiates an HTTP request" step; the task is generated directly by the Beat service. The execution flow is as follows:

  1. Beat periodically generates function execution tasks and pushes them to the Redis queue
  2. The Worker pops the function execution task from the Redis queue and executes it
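The Beat flow can be sketched the same way. Again a hypothetical model: the job table, field names, and the fixed-interval schedule check below are illustrative, not DataFlux Func's actual scheduler.

```python
from queue import Queue

# Stand-in for the Redis queue; Beat enqueues with no HTTP request involved.
task_queue = Queue()

# A toy schedule table: one job due every 60 seconds
jobs = [{"funcId": "demo.report", "everySeconds": 60, "lastRun": 0}]

def beat_tick(now):
    """Enqueue a task for every job whose schedule is due at `now`."""
    for job in jobs:
        if now - job["lastRun"] >= job["everySeconds"]:
            task_queue.put({"funcId": job["funcId"], "origin": "crontab"})
            job["lastRun"] = now

beat_tick(now=60)  # due: enqueues one task
beat_tick(now=90)  # only 30s elapsed: enqueues nothing
print(task_queue.qsize())  # 1
```

From the Worker's point of view the task is identical to one produced by the Server, which is why the dequeue-and-execute step is shared between both flows.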

2. Single-Machine Deployment

For a single-machine deployment of DataFlux Func, the entire process is very simple:

flowchart TB
    USER[User]
    SERVER[Server]
    WORKER[Worker]
    REDIS_QUEUE[Redis Queue]

    USER --HTTP Request--> SERVER
    SERVER --Function Execution Task Enqueue--> REDIS_QUEUE
    REDIS_QUEUE --Function Execution Task Dequeue--> WORKER

    Beat --"Function Execution Task Enqueue\n(Scheduled)"--> REDIS_QUEUE

3. Multi-Replica Deployment

In a multi-replica deployment of DataFlux Func, requests pass through SLB (or another reverse proxy service), so any Server replica may receive a given request.

At the same time, because all replicas connect to the same Redis, each task is retrieved and executed by exactly one Worker, chosen arbitrarily:

flowchart TB
    USER[User]
    SERVER_1[Server 1]
    SERVER_2[Server 2]
    WORKER_1[Worker 1]
    WORKER_2[Worker 2]
    REDIS_QUEUE[Redis Queue]

    USER --HTTP Request--> SLB
    SLB --HTTP Forwarding--> SERVER_1
    SLB -.-> SERVER_2
    SERVER_1 --Function Execution Task Enqueue--> REDIS_QUEUE
    SERVER_2 -.-> REDIS_QUEUE
    REDIS_QUEUE -.-> WORKER_1
    REDIS_QUEUE --Function Execution Task Dequeue--> WORKER_2

    Beat --"Function Execution Task Enqueue\n(Scheduled)"--> REDIS_QUEUE
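The "exactly one Worker" property shown above comes from popping from a single shared queue: a pop removes the task for everyone. A small sketch with an in-process queue (Redis's `BRPOP` gives the same atomic guarantee across connections):

```python
from queue import Queue, Empty

# One shared queue, several consumers: each task is popped exactly once.
task_queue = Queue()
for i in range(6):
    task_queue.put(i)

def drain(limit):
    """Pop up to `limit` tasks; each pop removes the task for all consumers."""
    got = []
    for _ in range(limit):
        try:
            got.append(task_queue.get_nowait())
        except Empty:
            break
    return got

worker_1 = drain(4)
worker_2 = drain(4)
# Together the two workers cover all tasks, with no task seen twice.
print(sorted(worker_1 + worker_2))    # [0, 1, 2, 3, 4, 5]
print(set(worker_1) & set(worker_2))  # set()
```

This is why adding Worker replicas scales throughput without any coordination between Workers: the queue itself arbitrates who gets each task.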

4. Fully Independent Master/Backup Deployment

In some cases, to achieve a "fully independent master/backup deployment," you can additionally split Redis and set the master-to-backup weight ratio in SLB (or another reverse proxy service) to 100:0.

At this point the master and backup nodes are completely independent; each runs as a full, standalone DataFlux Func. If DataFlux Func is running on both nodes at the same time, scheduled tasks will be executed twice. Therefore, either keep DataFlux Func on the backup node shut down during normal operation, or add logic inside the script itself to avoid duplicate task execution.

flowchart TB
    USER[User]
    MAIN_NODE_SERVER[Master Node Server]
    MAIN_NODE_WORKER[Master Node Worker]
    MAIN_NODE_BEAT[Master Node Beat]
    MAIN_NODE_REDIS_QUEUE[Master Node Redis Queue]
    BACKUP_NODE_SERVER[Backup Node Server]
    BACKUP_NODE_WORKER[Backup Node Worker]
    BACKUP_NODE_BEAT[Backup Node Beat]
    BACKUP_NODE_REDIS_QUEUE[Backup Node Redis Queue]

    USER --HTTP Request--> SLB
    SLB --HTTP Forwarding--> MAIN_NODE_SERVER
    SLB -.-> BACKUP_NODE_SERVER

    subgraph Backup Node
        direction TB
        BACKUP_NODE_SERVER --Function Execution Task Enqueue--> BACKUP_NODE_REDIS_QUEUE
        BACKUP_NODE_REDIS_QUEUE --Function Execution Task Dequeue--> BACKUP_NODE_WORKER

        BACKUP_NODE_BEAT --"Function Execution Task Enqueue\n(Scheduled)"--> BACKUP_NODE_REDIS_QUEUE
    end

    subgraph Master Node
        direction TB
        MAIN_NODE_SERVER --Function Execution Task Enqueue--> MAIN_NODE_REDIS_QUEUE
        MAIN_NODE_REDIS_QUEUE --Function Execution Task Dequeue--> MAIN_NODE_WORKER

        MAIN_NODE_BEAT --"Function Execution Task Enqueue\n(Scheduled)"--> MAIN_NODE_REDIS_QUEUE
    end