Script Development / Function Execution Process

This article describes the execution flow of a task when a function is called in DataFlux Func.

1. Basic Process

In general, when calling a function in DataFlux Func, the execution flow is as follows:

  1. User initiates an HTTP request that reaches the Server
  2. The Server generates a function execution task and pushes it into the Redis queue
  3. A Worker pops the function execution task off the Redis queue and executes it
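The three steps above can be sketched as a minimal producer/consumer pair. This is an illustration only: a plain `deque` stands in for the Redis list (a real deployment would use Redis `LPUSH`/`BRPOP`), and the task payload fields are hypothetical.

```python
# Sketch of the Server -> Redis queue -> Worker flow.
# A deque stands in for the Redis list; payload fields are illustrative.
import json
from collections import deque

redis_queue = deque()  # stand-in for a Redis list

def server_enqueue(func_id, kwargs):
    """Server side: wrap the call into a task and push it onto the queue."""
    task = json.dumps({'func_id': func_id, 'kwargs': kwargs})
    redis_queue.appendleft(task)  # like LPUSH

def worker_dequeue():
    """Worker side: pop one task off the queue for execution."""
    task = json.loads(redis_queue.pop())  # like BRPOP
    # ... look up and call the actual function here ...
    return task

server_enqueue('demo.hello', {'name': 'world'})
task = worker_dequeue()
print(task['func_id'])  # → demo.hello
```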

For scheduled tasks, there is no step of "user initiating an HTTP request"; instead, the task is directly generated by the Beat service, and the execution flow is as follows:

  1. Beat periodically generates function execution tasks and pushes them into the Redis queue
  2. A Worker pops the function execution tasks off the Redis queue and executes them
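The Beat side of the flow can be sketched the same way: on each tick, Beat enqueues a task for every scheduled entry that is due. To keep this self-contained, the sketch uses a hypothetical interval-based schedule table rather than real crontab expressions, and a `deque` again stands in for the Redis queue.

```python
# Sketch of the Beat -> Redis queue flow: enqueue every due scheduled task.
# SCHEDULES and the interval-based due check are illustrative assumptions.
import json
from collections import deque

redis_queue = deque()  # stand-in for the Redis queue

# Hypothetical schedule table: func_id -> interval in seconds
SCHEDULES = {'demo.sync_data': 60, 'demo.cleanup': 300}

def beat_tick(now_ts, last_run):
    """Enqueue a task for every schedule whose interval has elapsed."""
    for func_id, interval in SCHEDULES.items():
        if func_id not in last_run or now_ts - last_run[func_id] >= interval:
            redis_queue.appendleft(json.dumps({'func_id': func_id}))
            last_run[func_id] = now_ts

last_run = {}
beat_tick(0, last_run)    # first tick: everything is due
beat_tick(60, last_run)   # 60s later: only demo.sync_data is due again
print(len(redis_queue))   # → 3
```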

2. Standalone Deployment

For standalone deployment of DataFlux Func, the entire process is very straightforward:

```mermaid
flowchart TB
    USER[User]
    SERVER[Server]
    WORKER[Worker]
    REDIS_QUEUE[Redis Queue]

    USER --HTTP Request--> SERVER
    SERVER --Function Execution Task Enqueued--> REDIS_QUEUE
    REDIS_QUEUE --Function Execution Task Dequeued--> WORKER

    Beat --"Function Execution Task Enqueued\n(Scheduled)"--> REDIS_QUEUE
```

3. Multi-replica Deployment

For multi-replica deployment of DataFlux Func, since there is an SLB (or other reverse proxy service), any Server might receive the request.

Additionally, because each replica connects to the same Redis, each task will only be fetched and executed by one Worker:
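The "each task reaches exactly one Worker" property follows from the dequeue operation itself: popping a task removes it from the shared queue, so no other Worker can see it again. A minimal sketch, with a `deque` standing in for the shared Redis queue (Redis `BRPOP` provides the same guarantee atomically across real worker processes):

```python
# Sketch: two workers draining one shared queue; every pop removes the
# task, so each task is executed exactly once by exactly one worker.
from collections import deque

redis_queue = deque(f'task-{i}' for i in range(10))

worker_1, worker_2 = [], []
while redis_queue:
    # Workers take turns popping; each pop claims the task exclusively.
    target = worker_1 if len(worker_1) <= len(worker_2) else worker_2
    target.append(redis_queue.pop())

# Every task executed exactly once, split across the two workers:
assert sorted(worker_1 + worker_2) == sorted(f'task-{i}' for i in range(10))
print(len(worker_1), len(worker_2))  # → 5 5
```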

```mermaid
flowchart TB
    USER[User]
    SERVER_1[Server 1]
    SERVER_2[Server 2]
    WORKER_1[Worker 1]
    WORKER_2[Worker 2]
    REDIS_QUEUE[Redis Queue]

    USER --HTTP Request--> SLB
    SLB --HTTP Forwarding--> SERVER_1
    SLB -.-> SERVER_2
    SERVER_1 --Function Execution Task Enqueued--> REDIS_QUEUE
    SERVER_2 -.-> REDIS_QUEUE
    REDIS_QUEUE -.-> WORKER_1
    REDIS_QUEUE --Function Execution Task Dequeued--> WORKER_2

    Beat --"Function Execution Task Enqueued\n(Scheduled)"--> REDIS_QUEUE
```

4. Fully Independent Active-Standby Deployment

In certain situations, if you need a "fully independent active-standby deployment," you can further split the Redis instances and set the primary-to-standby weight ratio in the SLB (or other reverse proxy) to 100:0.

At this point, the primary and standby nodes are completely independent, each running its own instance of DataFlux Func. Running both instances simultaneously would therefore cause scheduled tasks to execute twice. You can either keep the standby node's DataFlux Func shut down during normal operation, or add logic to your scripts that prevents duplicate executions.
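One hedged way to implement the "custom logic" mentioned above is to guard each scheduled function with a role check, so the function body only runs on the node marked as primary. The `DFF_NODE_ROLE` environment variable name here is hypothetical; use whatever marker distinguishes your primary node from your standby node.

```python
# Sketch: skip scheduled work unless this node is marked as the primary.
# DFF_NODE_ROLE is an assumed, self-chosen environment variable.
import os
import functools

def primary_only(func):
    """Decorator: run the function only on the primary node."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        if os.environ.get('DFF_NODE_ROLE', 'backup') != 'primary':
            return None  # standby node: do nothing
        return func(*args, **kwargs)
    return wrapper

@primary_only
def sync_data():
    return 'synced'

os.environ['DFF_NODE_ROLE'] = 'backup'
print(sync_data())  # → None
os.environ['DFF_NODE_ROLE'] = 'primary'
print(sync_data())  # → synced
```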

```mermaid
flowchart TB
    USER[User]
    MAIN_NODE_SERVER[Main Node Server]
    MAIN_NODE_WORKER[Main Node Worker]
    MAIN_NODE_BEAT[Main Node Beat]
    MAIN_NODE_REDIS_QUEUE[Main Node Redis Queue]
    BACKUP_NODE_SERVER[Backup Node Server]
    BACKUP_NODE_WORKER[Backup Node Worker]
    BACKUP_NODE_BEAT[Backup Node Beat]
    BACKUP_NODE_REDIS_QUEUE[Backup Node Redis Queue]

    USER --HTTP Request--> SLB
    SLB --HTTP Forwarding--> MAIN_NODE_SERVER
    SLB -.-> BACKUP_NODE_SERVER

    subgraph Backup Node
        direction TB
        BACKUP_NODE_SERVER --Function Execution Task Enqueued--> BACKUP_NODE_REDIS_QUEUE
        BACKUP_NODE_REDIS_QUEUE --Function Execution Task Dequeued--> BACKUP_NODE_WORKER

        BACKUP_NODE_BEAT --"Function Execution Task Enqueued\n(Scheduled)"--> BACKUP_NODE_REDIS_QUEUE
    end

    subgraph Main Node
        direction TB
        MAIN_NODE_SERVER --Function Execution Task Enqueued--> MAIN_NODE_REDIS_QUEUE
        MAIN_NODE_REDIS_QUEUE --Function Execution Task Dequeued--> MAIN_NODE_WORKER

        MAIN_NODE_BEAT --"Function Execution Task Enqueued\n(Scheduled)"--> MAIN_NODE_REDIS_QUEUE
    end
```