# Submission flow

This article describes in detail the execution flow of a submission, from the point where the solution is submitted to the web application to the point where the results of the execution are evaluated. Only the hot path is considered in the following description.

## Web Application

First of all, the user has to submit his/her solution to the web application, which provides an interface to upload multiple files and then submit them. A more detailed description follows:

- the user uploads the files one by one into the prepared submit form
- after uploading all the files belonging to the assignment, the user clicks the submit button
- the web application sends a request to the web API saying that the user wants the assignment evaluated with the provided files

## Web API

After the user submits a solution, the web application has to hand over all the information needed about the submission to the broker. The submit endpoint also supports "cross-submitting", which means that a user with higher privileges (supervisor, administrator) can submit a solution on behalf of a user with lower privileges (student). A more detailed description follows:

- the corresponding assignment and user are obtained from the database
- the instance license, the deadlines and the limit on the number of submissions are checked
- the runtime configuration is detected automatically if it is not provided as a POST parameter
- the submission is created and persisted as a database entity
- the job configuration is obtained and loaded from the selected runtime configuration
- the configuration is extended with the proper fileserver address and identification
- the files submitted by the user and the job configuration are uploaded to the fileserver using an HTTP POST request
- various pieces of information about the submission are sent back to the user
- among them is the address of a websocket which can be used to watch the evaluation progress

## Broker

The broker receives the information about the new submission from the web application. At this point, the broker has to find a suitable worker to execute this particular submission. When such a worker is found and is currently jobless, the broker sends the submission details to the worker for evaluation. A more detailed description follows:

- the broker gets a multipart "eval" message from the web application with the job identification, the source archive URL, the result URL and the appropriate worker headers
- the headers are parsed, and a worker which matches all of them is chosen as the one which will execute the incoming submission (a sketch of this matching follows the list)
- the whole execution request is saved into the waiting queue of the `worker` structure
- if the chosen worker is not working right now, the incoming request is forwarded directly from the waiting queue to the worker as a multipart message
- if the worker queue is not empty, nothing is done at this point; the execution of this particular request is suspended until the worker completes all of its previous requests
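The following is a minimal, hypothetical C++ sketch of the header matching step described above: a worker is suitable for a request when every request header is present, with the same value, among the headers the worker advertised. The names used here (`worker_info`, `matches`) are illustrative and do not mirror the actual broker sources.

```cpp
#include <map>
#include <string>

// Headers a worker advertised on registration, e.g. {"hwgroup", "group1"}.
struct worker_info {
    std::string id;
    std::multimap<std::string, std::string> headers;
};

// A worker matches a request when every request header appears among the
// worker's own headers with the same value.
bool matches(const worker_info &worker,
             const std::multimap<std::string, std::string> &request_headers)
{
    for (const auto &header : request_headers) {
        auto range = worker.headers.equal_range(header.first);
        bool found = false;
        for (auto it = range.first; it != range.second; ++it) {
            if (it->second == header.second) {
                found = true;
                break;
            }
        }
        if (!found) {
            return false;
        }
    }
    return true;
}
```

The broker would then pick a matching worker and, as described above, either forward the request to it immediately or leave the request in that worker's waiting queue.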
## Worker

The worker receives the request to evaluate a particular submission from the broker. The next step is to evaluate the given submission and upload the results to the fileserver. After this, the worker only tells the broker that the submission was evaluated. A more detailed description follows:

- the "listening" thread gets a multipart message with the "eval" command from the `broker`
- the "listening" thread hands the whole message over to the "execution" thread through an `inproc` socket
- the "execution" thread now has to prepare everything and get ready for the execution
  - the names of the temporary folders are initialized (but the folders are not created yet); this includes the folder with the source files, the folder for the downloaded submission, a temporary directory for all possible kinds of files and the folder which will contain the results of the execution
  - if any of the folders stated above already exists, it is deleted
  - after successful initialization, the submission archive is downloaded into the created folder
  - the submission archive is decompressed into the submission files folder
  - all files from the decompressed archive are copied into the evaluation directory, which can be used for execution in sandboxes
  - all the remaining folders which have not been created yet are created at this point
- now it is time to build the `job` from the configuration
  - the job configuration file is located in the evaluation directory, if it exists, and is loaded using the `yaml-cpp` library (see the loading sketch after this list)
  - the loaded configuration is parsed into a `job_metadata` structure, which is handed over to the `job` execution class itself
  - the `job` execution class initializes and constructs the particular `tasks` from `job_metadata` into a task tree
    - if there is an item which can use variables (e.g. the binary path, command line arguments, bound directories), the variables are resolved at this point
  - all tasks from the configuration are created and divided into external and internal ones
    - external tasks have to load their limits from the configuration for this worker's hwgroup, which was read from the worker configuration
    - if no limits were defined, the default worker limits are loaded
    - internal tasks are just created, without any further processing of the kind external tasks need
  - after the successful creation of all tasks, they are connected into a graph structure using their dependencies
  - the next and last step of building the `job` structure is to run a topological sort and obtain the queue of tasks in the order in which they will be executed
    - the topological sort takes into account the priority of the tasks and their sequential order in the job configuration
- all tasks are then run in this order
- after that, the results of all executed tasks are collected and written into the corresponding YAML file
- the result YAML file, together with the contents of the result folder, is compressed and sent back to the fileserver
- of course, there has to be a cleanup after the whole evaluation, which deinitializes all the needed variables and deletes all the temporary folders that were used
- everything above happened in the "execution" thread, which now has to tell the "listening" thread that the execution is done
  - this is done through a multipart "done" message with the packed job identification, addressed to the "listening" thread
  - the action of the "listening" thread is now pretty straightforward: the "done" message is resent to the `broker`
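As an illustration of the configuration loading step, here is a minimal sketch of reading a job configuration with the `yaml-cpp` library. The file name and the node names used here (`tasks`, `task-id`, `priority`) are assumptions made for the example; the authoritative schema is given by the job configuration documentation.

```cpp
#include <yaml-cpp/yaml.h>

#include <iostream>
#include <string>

int main()
{
    // Load and parse the job configuration; LoadFile throws on errors.
    YAML::Node config = YAML::LoadFile("job-config.yml");

    // Walk the task list and read the fields relevant for ordering.
    for (const auto &task : config["tasks"]) {
        auto id = task["task-id"].as<std::string>();
        // The priority is one of the inputs of the topological sort;
        // fall back to a default when the configuration omits it.
        int priority = task["priority"].as<int>(1);
        std::cout << id << " (priority " << priority << ")\n";
    }
    return 0;
}
```

In the worker, values read this way are stored in the `job_metadata` structure rather than printed, and the `job` class builds the task tree from them.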
## Broker

The broker receives the "done" message from the worker and basically only marks the submission as done in its internal structures. After that, the broker has to tell the web API that the execution of the particular job has ended. A more detailed description follows:

- the broker gets the "done" message from the worker after the successful execution of the job
- the appropriate `worker` structure is found based on its identification
- some invariants are checked (the current job identification, the right number of arguments)
- if the job results arrived with the status "OK"
  - the frontend is notified that the job ended successfully
  - the current execution request is deleted from the `worker` structure, and the worker is now considered free
  - if the worker execution queue is not empty, the next waiting request is taken and made the current one
    - after that, the only missing step is to send that request to the worker and loop back to the worker execution
  - if the worker queue is empty, the worker remains free and waits for another execution request
- if the job results arrived with the status "INTERNAL_ERROR"
  - the current request is retrieved and deleted from the current worker
  - if the request can be reassigned, it is assigned to another worker
  - if the request has already been assigned too many times, the frontend is notified about it and the request is thrown away
- if the job results arrived with the status "FAILED"
  - the frontend is notified through a REST API call that the job with this particular identification failed
  - the request is cancelled, and if some other request is waiting, it is assigned to the worker

## Web API

The web API is notified about the job status through simple HTTP calls originating from the broker. After that, the API decides whether the evaluated submission will be processed immediately or on demand. A more detailed description follows:

- the job ended with the status "OK"
  - if the submission was an evaluation of a reference solution
    - the results of the evaluation are loaded from the fileserver and unzipped
    - the actual time and memory consumption and other suitable results are retrieved
    - a solution evaluation database entity is created and persisted
    - if the evaluation of the results failed, a report is sent to the administrator
  - if the submission was a regular student evaluation
    - if the user submitted the solution by him/herself, nothing is done here
    - if an administrator or a supervisor submitted the exercise as a particular user, the results of the submission are processed as described below in the "On demand loading" section
- the job ended with the status "FAILED"
  - the job failure is saved into the database alongside the other reported errors
  - an email is prepared and sent to the ReCodEx administrator

### On demand loading

This is realised by calling the evaluate API endpoint. The results of the submission are loaded from the fileserver and then processed. A more detailed description follows:

- the results of the submission are retrieved from the fileserver
- using the job configuration which was used for the submission, the results are loaded into the internal API structures
- the typed results are used to compute the overall score of the solution (a sketch of such a computation closes this article)
- the detected score, together with any other suitable information, is stored in the database as a solution evaluation entity

## Web Application

The web application only has a simple job to do. If the results are obtained on demand, the proper API call is executed, and the results are obtained and shown to the user. A more detailed description follows:

- if the results of the submission are not prepared yet, the evaluate API endpoint is called
- after this, the results should be present and can be shown to the user
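To close the article, here is a small, hypothetical sketch of the score computation mentioned in the "On demand loading" section, written in C++ for consistency with the other sketches. It assumes the overall score is a weighted fraction of passed tests; the actual rules live in the web API's score calculator and may differ.

```cpp
#include <iostream>
#include <vector>

// One typed result per test: whether the test passed and its weight.
struct test_result {
    bool passed;
    double weight;
};

// Compute the overall score as the weighted fraction of passed tests.
// This is only an illustration; the real scoring rules are defined
// by the web API.
double overall_score(const std::vector<test_result> &results)
{
    double total = 0.0;
    double passed = 0.0;
    for (const auto &result : results) {
        total += result.weight;
        if (result.passed) {
            passed += result.weight;
        }
    }
    return total > 0.0 ? passed / total : 0.0;
}

int main()
{
    std::vector<test_result> results = {{true, 1.0}, {false, 2.0}, {true, 1.0}};
    std::cout << overall_score(results) << "\n"; // prints 0.5
    return 0;
}
```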