
Create Original Challenges – Coding Style

※ Creating original coding-style challenges is a paid option. If you would like this service, please contact Customer Success.

Written by Track Test CS_Seiya Tajima
Updated over 2 weeks ago


◾️ Steps to Create Original Challenges (Coding Style)

  1. Install Marine.

  2. Use Marine to implement your coding challenge.

  3. Submit the generated project to the Track operations team.

◾️ What Is Marine?

Marine is a tool that lets you create Track content and preview its behavior in real time.

  • For details, see the guide below.

    ◾️ Overview

    Marine is a real-time preview tool for Track content.

    While authoring, you can view and run the content exactly as it will appear on Track—tests, blanks, and all interactive elements behave identically.

    ◾️ Basic Usage

    1. At the top of the screen, open the first select box. • All directories under contents/ that contain either track.yml or book.yml are listed.

    2. Choose a directory. • The lower workspace refreshes and displays that content.

    3. Edit any file in the directory, then click Reload to apply changes to the workspace.

    Workspace behavior matches Track, so you can run tests, fill blanks, and verify interactions.

    ◾️ Header Layout

| UI Element | Purpose |
| --- | --- |
| Contents select box | Lists directories that include track.yml or book.yml; switches the workspace content |
| Language select box (CLI / Function challenges only) | Choose the programming language for CLI / Function challenges |
| solution select box (Challenges only) | Select which sample solution to display |
| validate button (Challenge / Book) | Check whether the content can be registered on Track |

◾️ Menu Options

| Option | Description |
| --- | --- |
| reload content list | Rescan contents/; click after adding new directories with track.yml or book.yml |
| Download | Create a ZIP of the entire content (including track.yml / book.yml); use when uploading via TCM |
| DevMode | Toggle developer-mode view; see each content's developer guide for details |
| Orca Information | Switch the execution environment; useful when testing new language versions |
| Selected Language | Switch between English / Japanese in README files and test cases |

◾️ Writing Content

Marine supports authoring these content types:

  • Challenge

  • Quiz

  • Book

    ◾️ Overview

    Track Books are designed so that almost everything can be authored in Markdown.

    This document describes every file and option used to build a Book.

    ◾️ A Minimal Book

    The smallest possible Book contains just two files:

    contents/
    └─ my-book/
       ├─ book.yml
       └─ chapter1.md

    book.yml

    title: Introduction to CODEPREP
    detail: A sample Book for CODEPREP.
    chapters:
      - chapter1.md

    chapter1.md

    # First Chapter

    Sample chapter definition.

    ## First Section

    Sample section definition. Fill the blank so the HTML prints “Hello World”.

    ### main(index.html)

    <p>Hello ${World}</p>

    When viewed in Marine you will see <p>Hello [ ]</p>.

    ◾️ book.yml

    book.yml defines the whole Book.

    Required keys

    title – Book title

    detail – Short description (use | for multi-line)

    chapters – List of chapter definition files (order = chapter order)

    Optional keys

    cover, files, images, download, playground, webStorage, keepSession,

    timeInMinutes, env, shared, carryOver, initialize, sandbox, answerSection

    Title & Detail

    detail: |
      Multi-line description
      written with the pipe syntax.

    Chapters

    List the Markdown files that define each chapter.

    File names are arbitrary; only the path matters.

    Common Optional Keys (excerpt)

    • files – Files that appear as tabs in every section

    • images – Images referenced only in explanations

    • download / playground – true (default) or false

    • webStorage – Enable sessionStorage / localStorage

    • keepSession – Keep a remote shell alive between commands

    • env – Docker image and caches; see env.md

    • answerSection – none, each, or last (default: each)

    • carryOver – Files whose content is carried to later sections

    • initialize – Files & commands executed once when a remote session starts

    • sandbox – Global sandbox files and options
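
    A minimal sketch of how a few of these optional keys might sit in book.yml (the key names come from the list above; the file names and values are illustrative assumptions):

    title: Introduction to CODEPREP
    detail: |
      A sample Book for CODEPREP.
    chapters:
      - chapter1.md
    files:
      - main.css            # appears as a tab in every section
    images:
      - images/diagram.png  # referenced only in explanations
    download: true          # true (default) or false
    answerSection: last     # none, each, or last (default: each)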

    ◾️ Chapter Definition File

    Headings

    # = chapter title (must be unique, first line)

    ## = section title (multiple allowed)

    ### = sub-section: main, hint, preview, etc.

    Comments

    Lines starting with // are ignored during compilation (still visible on GitHub).

    HTML comments (<!-- -->) are not ignored.

    Section Structure & Types

    A section may contain one of these main blocks:

    • main – Code-fill blanks

    • exercise – Client / Remote / CLI tasks

    • playground – Free-edit area

    • sandbox – Persistent free-edit area

    • mcq – Multiple-choice question

    • fib – Fill-in-blank inside Markdown

    • preview – Read-only explanation

    main, exercise, playground, fib, mcq, and sandbox are mutually exclusive.

    Main Block Example

    ### main(index.html)

    <${p}>Hello World<${\/p}>

    ${answer} rules

    • Plain string → exact match

    • Starts with / → RegExp

    • Starts with { → JSON rule

    • Starts with \ → escape the first character

    Use - prefix: under the ### main line to replace $ with another symbol.
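
    A hypothetical sketch (the option name "- prefix:" is quoted from the sentence above; its placement as a list item under the heading and the choice of % are assumptions):

    ### main(template.php)
    - prefix: %

    <p>Hello %{World}</p>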

    Hint / Tips

    ### hint
    …hidden hint…

    ### tips
    …alternative hint…

    Answer

    Use ### answer for explanations; they can be aggregated later.

    Files

    ### files
    - main.css
    - [main.js](files/app.js)

    MCQ

    Markdown list with - [ ] items; correct answers - [x].
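
    For example, a sketch of an mcq section (the heading and checkbox syntax follow the description above; the question itself is illustrative):

    ### mcq
    Which tag defines a paragraph in HTML?

    - [x] <p>
    - [ ] <div>
    - [ ] <span>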

    FIB

    Markdown containing ${blank}. Supports dropdowns:

    ### dropdown
    - byte
      - 1 byte
      - 1 kilobyte

    Preview & Remote Execution

    If remote is defined, a console appears.

    If not, Track auto-chooses:

    1. Defined preview

    2. index.html in main

    3. index.html in files

    4. Otherwise show main as plain text.

    remote supports command, build, prepare, cwd, after, previewFile, mode: html | console, and special token @main.
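
    A sketch of a remote block built from those keys (the key names and the @main token come from the list above; writing them as list items under a ### remote heading mirrors the other blocks, and the commands are illustrative):

    ### remote
    - build: gcc -o main @main
    - command: ./main
    - mode: console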


    ◾️ Advanced Functions

    Reusing Previous Sections

    Insert the completed code from a prior section:

    ${func:section(prev)}

    prev / numeric index / alias are supported.

    Split by blank lines or markers:

    ${func:section-before-blank(prev)}
    …new code…
    ${func:section-after-blank(prev)}

    func:mark

    Embed a marker to target insertion points:

    // ${func:mark("next")}

    Then reference:

    ${func:section-after(prev,"next")}

    External File Helpers

    func:file(...), file-before-blank, etc. mirror the section helpers.
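
    For instance, a sketch (the helper name func:file comes from the sentence above; the argument style and the path are assumptions that mirror the section helpers):

    ${func:file(assets/starter.js)}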


    ◾️ Exercise / Playground / Sandbox

    Client Exercise

    ### exercise
    - mode: client
    - file: [main.js](assets/1-2.main.js)
    - test: assets/1-2.test.js

    test must output TAP (e.g., Mocha + -R tap).

    Remote Exercise

    ### exercise
    - mode: remote
    - build: javac Main.java
    - file: [Main.java](assets/Main.java)
    - hidden: [Tests.java](assets/Tests.java)
    - command: java Tests

    CLI Exercise

    Similar to a coding challenge; mode: cli.

    Importing a Challenge

    book.yml

    shared:
      - exercise1: ../../challenges/sample

    chapter.md

    - mode: coding
    - reference: exercise1

    Partial Edit Regions

    Wrap editable code with:

    // EXERCISE_BEGIN_EDIT
    …user edits here…
    // EXERCISE_END_EDIT

    Playground

    ### playground
    - mode: client
    - file: index.html
    - file: main.js

    Sandbox

    Global files in book.yml, per-section display with:

    ### sandbox
    - show: index.html

    ◾️ Debug / Env / DevMode

    Debug

    Define like challenges:

    ### debug
    - command: cat $f | $c
    - raw:[Case 1]a b c
    - file:[Case 2]test/in/sample.txt

    Per-Section env

    ### env
    - imageName: java
    - cacheDirs: /root/.m2

    DevMode (Marine)

    • Saves user edits between reloads

    • Enforces solving order (with skip)

    • Enables carry-over testing

    Use “Clean up user code” to reset.

  • AI

    ◾️ Overview

    An AI Challenge covers programming problems that are not suitable for Track’s online editor because they

    • handle large-scale datasets, or

    • require extensive computing resources.

    Typical use cases involve machine-learning tasks.

    ◾️ What Candidates Can Do in an AI Challenge

    1. Read the problem statement (README.md) in the browser.

    2. Download the data archive required to solve the problem. • File sizes can range from several hundred MB to multiple GB.

    3. Upload an answer file in the format specified by the author (e.g., JSON or CSV). • Each upload triggers automatic grading.

    By default, Track shows the raw score after grading.

    If hideRawScore: true is set in track.yml, Track instead shows a rank (S / A / B / C / D / E).
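
    In track.yml this is a single boolean flag:

    hideRawScore: true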

    ◾️ What Authors Must Prepare

    1. Write the problem statement in README.md.

    2. Package all dataset files into a .tar.gz or .zip, then host the archive at any publicly downloadable URL.

    3. Build a judge application that • reads the candidate’s upload, • evaluates it, and • prints a numeric score to STDOUT. If the judge itself is large (e.g., needs extra data), you may also host it as a .tar.gz at a public URL.

    Unlike coding challenges—whose entire content lives in Git—AI Challenges keep large files in external storage.

    During import, Track copies these files into private storage, so candidates never access the public URLs directly.

    ◾️ Judge Application Specification

    • Runs inside Track’s Docker execution environment.

    • May be written in any language supported by Track.

    • Obtain the uploaded filename via the UPLOAD_FILENAME environment variable.

    • Output only one numeric score to STDOUT; that value becomes the candidate’s score.

    • On invalid input, write a clear error message to STDERR; the message is shown to the candidate as-is.
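
    A minimal judge sketch in Node.js under these rules (only UPLOAD_FILENAME, the numeric score on STDOUT, and the STDERR error behavior come from the specification above; the CSV format, the accuracy metric, and the expected.csv answer key are illustrative assumptions):

    // judge.js – illustrative AI Challenge judge (not an official template)
    const fs = require("fs");

    // Track passes the uploaded file's name via UPLOAD_FILENAME.
    const uploadPath = process.env.UPLOAD_FILENAME;

    try {
      const parse = (text) =>
        text.trim().split("\n").map((line) => line.trim().split(","));

      const submitted = parse(fs.readFileSync(uploadPath, "utf8"));
      const expected = parse(fs.readFileSync("expected.csv", "utf8")); // hypothetical answer key

      if (submitted.length !== expected.length) {
        throw new Error(`expected ${expected.length} lines, got ${submitted.length}`);
      }

      // Accuracy: fraction of rows whose label matches the answer key.
      let correct = 0;
      for (let i = 0; i < expected.length; i++) {
        if (submitted[i][1] === expected[i][1]) correct++;
      }

      // The single number printed to STDOUT becomes the candidate's score.
      console.log(correct / expected.length);
    } catch (err) {
      // Messages on STDERR are shown to the candidate as-is.
      console.error("Invalid submission: " + err.message);
      process.exit(1);
    }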

    ◾️ scoreType

    Different tasks use different metrics—accuracy, recall, RMSE, etc.

    Define how Track should rank scores with scoreType in track.yml.

| Key | Required | Description |
| --- | --- | --- |
| rankBy | Yes | high (larger is better) / low (smaller is better) / zero (absolute value closer to 0 is better) |
| max | No | Upper bound of possible scores |
| min | No | Lower bound of possible scores |

scoreType:
  rankBy: high   # higher score = better
  max: 1
  min: 0

◾️ rankedRange

Map raw scores to ranks S-E.

rankedRange:
  S: 0.80..
  A: 0.75..0.80
  B: 0.65..0.75
  C: 0.50..0.65
  D: 0.40..0.50
  E: ..0.40

The first matching range takes precedence if ranges overlap.

◾️ track.yml

| Key | Type | Required | Purpose |
| --- | --- | --- | --- |
| type | String | Yes | Must be AI |
| readonly | Array<String> | – | Public URL(s) of dataset archive(s) (.tar.gz / .zip) |
| hide | Array<String> | – | Files to bundle but hide from candidates (e.g., judge source). Accepts file paths or public URLs |
| images | Array<String> | – | Image files referenced from README.md (wildcards allowed) |
| test | String | Yes | Command to run the judge application |
| build | Array<String> | – | Build commands if the judge needs compilation |
| envConf | Hash | Yes | Docker image and settings (see env.md) |
| scoreType | Hash | – | Scoring rules (see above) |
| hideRawScore | Boolean | – | If true, show rank instead of raw score |

Unknown keys are ignored.

Example:

type: AI
test: node test.js
images:
  - images/nue.jpg
readonly:
  - https://example.com/ai-challenge/data.tar.gz
hide:
  - https://example.com/ai-challenge/judge.tar.gz
scoreType:
  rankBy: high
  min: 0
  max: 1
envConf:
  imageName: givery/track-base2

◾️ README.md

Write the full problem statement in Markdown.

Official challenges follow a template; custom challenges may use any structure.

Markdown extensions are available—see the extension guide.

README.md itself is not listed in track.yml.

◾️ Internationalization

During compilation, Track selects the README file based on the language option (ja / en):

| Language | Preferred File | Fallback |
| --- | --- | --- |
| ja | README_ja.md | README.md |
| en | README_en.md | README.md |

Provide both files for bilingual challenges.

Inside the judge application, read the CHALLENGE_LANGUAGE environment variable to localize error messages.
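
For example, in a Node.js judge (a sketch; only the CHALLENGE_LANGUAGE variable and its ja / en values come from the sentence above, the messages themselves are illustrative):

// Localize judge error messages based on the candidate's language setting.
const lang = process.env.CHALLENGE_LANGUAGE; // "ja" or "en"
const message =
  lang === "ja"
    ? "アップロードされたファイルの形式が不正です。"
    : "The uploaded file has an invalid format.";
console.error(message);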

  • App

    ◾️ Overview

    An Application Challenge lets candidates build and submit a full Web application and have it executed and evaluated on Track.

    Typical frameworks include but are not limited to:

    • Ruby on Rails (Ruby)

    • Express (Node.js)

    • Spring (Java)

    ◾️ Candidate Workflow

    1. The problem statement appears on the left-hand side.

    2. The candidate downloads a ZIP starter kit, unpacks it locally, and builds the required Web application. • Their local environment may need tools such as Java, Node.js, or Docker.

    3. When finished, they compress the entire project into a ZIP file and upload it.

    4. Track starts the Web application inside its environment and runs the author-supplied tests, allowing the candidate to verify: • whether the app runs on Track, and • whether it passes all tests.

    ◾️ Screen Layout

    The interface has four areas:

    1. Left Pane – Problem statement and a toggle to show the file tree of the uploaded ZIP.

    2. Right-Top Pane – Displays the file selected in the tree, with diffs against the starter kit or last upload.

    3. Right-Bottom Pane – Three tabs: • Status – Current runtime state of the app. • Test Cases – Results of the latest test run (may take several minutes). • Log – Logs from the last app startup.

    4. Footer Buttons • Start / Stop – Launch or halt the app using the latest ZIP. • Upload – Upload a ZIP file. • Download – Download the starter kit.

    ◾️ Using Marine to Create an Application Challenge

    Certain authoring features call AWS APIs and therefore require AWS credentials.

    Add the following environment variables to your docker-compose.yml when running Marine:

    environment:
      AWS_ACCESS_KEY_ID: xxxx
      AWS_SECRET_ACCESS_KEY: xxxx

    Without these keys you can still:

    • validate application.yml,

    • preview README.md, and

    • view the starter kit / solution in Marine.

    You cannot:

    • register the challenge on Track,

    • run tests,

    • upload ZIPs, or

    • start / stop the app in the Track environment.

    ◾️ Application Challenge Creation Workflow

    1. Create application.yml.

    2. Write README.md.

    3. Prepare the starter kit inside the app directory.

    4. Create a sample solution in the solution directory.

    5. Write startup and test scripts in the hidden directory.

    6. If test output is not already TAP, add a converter in the conversion directory.


    ◾️ Creating application.yml

    application.yml is specific to Application Challenges.

| Key | Description |
| --- | --- |
| type | Fixed to web (optional) |
| test | Command executed for RUN (testing). Required. Usually sh run.sh. |
| images | Images referenced from README.md (wildcards allowed) |
| infra | ECS task size (optional)<br>• cpu: multiples of 256 (default 256)<br>• memory: multiples of 512 (default 512) |
| conversion | Convert test output to TAP (optional)<br>• command: one or more commands<br>• envConf: Docker image etc. (same format as track.yml) |
| testCount | Number of test cases (required or optional depending on compiler options) |
| required | File names that must exist in the uploaded ZIP |
| services | Docker Compose–compatible service definitions (required)<br>• Valid keys: image, user, ports, environment, healthcheck, depends_on, working_dir, command<br>• If only one service exists, it is the Main Service. With multiple services, one named app or web becomes the Main Service.<br>• The Main Service's command is mandatory (used for UP startup).<br>• If multiple services exist, the Main Service must expose an environment variable whose value is 127.0.0.1 (see "HOST_NAME" below).<br>• Prefer ECR image names to avoid Docker Hub rate limits. |
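
A sketch of an application.yml assembled from these keys (the keys and constraints come from the table above; the image placeholders, port, script names, and the HOST_NAME variable are illustrative):

test: sh run.sh
required:
  - package.json
services:
  app:                        # named "app" (or "web"), so it is the Main Service
    image: <your ECR image>   # prefer ECR image names to avoid Docker Hub rate limits
    working_dir: /usr/src/app
    command: sh up.sh         # mandatory for the Main Service (used for UP startup)
    ports:
      - "3000:3000"
    environment:
      HOST_NAME: 127.0.0.1    # other services are reached via 127.0.0.1 (see "HOST_NAME" below)
  db:
    image: <your ECR postgres image>
    environment:
      POSTGRES_PASSWORD: example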

◾️ Writing README.md

Write the full assignment in Markdown.

Official content follows a template; custom content may use any format.

Markdown extensions are available.

◾️ Preparing the Starter Files (app Directory)

  • Place the entire starter project here.

  • Any .gitignore inside app is respected when generating the downloadable ZIP.

  • You may include docker-compose.yml to simplify setup; see “HOST_NAME” and “Docker Image Names” below.

◾️ Providing a Sample Solution (solution Directory)

Include only the files that differ from app.

For solution preview, Track overlays solution on top of app before startup.

◾️ Authoring Startup / Test Scripts (hidden Directory)

hidden is not distributed to candidates.

Typical files:

  • up.sh – start the Web server.

  • run.sh – execute the test suite.

Each script must begin by downloading and unpacking the candidate ZIP via $SUBMISSION_URL:

wget -q -O temp.zip "$SUBMISSION_URL"
unzip -o temp.zip
rm temp.zip

echo "Succeed to extract zip"

# Start server or run tests below

You may also place additional private files (e.g., test data) here.

◾️ Converting Test Output to TAP (conversion Directory)

If your test runner cannot output TAP directly, place a converter script here and reference it via application.yml#conversion.

Otherwise, omit this directory.
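
If you do add a converter, the matching application.yml entry might look like this (the command and envConf keys come from the table above; the script name, its argument, and the image are illustrative):

conversion:
  command:
    - node conversion/to-tap.js results.xml
  envConf:
    imageName: givery/track-base2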

◾️ Additional Topics

solution.md

If you provide an explanatory write-up, add it as solution.md.

Marine and the web UI do not display this file.

Docker Image Names

  • In application.yml, prefer ECR images to avoid Docker Hub’s 100 pulls / 6 h limit.

  • In docker-compose.yml (inside app), use Docker Hub images because candidates cannot access ECR.

HOST_NAME

Fargate does not support Compose-style service aliases (db:5432).

Instead, containers must connect via 127.0.0.1:<port>.

Add an environment variable in the Main Service whose value is 127.0.0.1, and instruct candidates to reference that variable both locally and on Track.
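
For example (the variable name HOST_NAME, the db service, and the image are illustrative; on Track the same variable is set to 127.0.0.1 in application.yml, as sketched earlier):

# app/docker-compose.yml – candidate's local machine, Docker Hub images, Compose aliases work
services:
  db:
    image: postgres:15
  app:
    environment:
      HOST_NAME: db   # locally the app reaches the database via the service alias

The candidate's code reads HOST_NAME to build its connection string, so the same code works locally and on Track, where the variable resolves to 127.0.0.1.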

◾️ Internationalization

During compilation, Track selects README files as follows:

| Language Option | Used File | Fallback |
| --- | --- | --- |
| ja | README_ja.md | README.md |
| en | README_en.md | README.md |

The same rule applies to solution.md.

Other files are not internationalized.

◾️ Detailed Workflow: Build & Upload with Marine

A complete guide to creating coding challenges with Marine is available here:

  • How to Build a Track Challenge

    ■ Overview

    A Challenge asks users to write code and validates the result with unit tests.

    Four categories exist:

    • CLI (cli) – language-agnostic CLI apps

    • Function (function) – language-agnostic single functions

    • Coding (coding) – language-specific implementations

    • Frontend (frontend) – browser-side development tasks

    ■ Challenge Types

| Type | Core Idea |
| --- | --- |
| CLI | Read parameters via CLI args or STDIN, print to STDOUT. Any language permitted. |
| Function | Implement a predefined function. Args / return types: string, int, float, boolean, and their 1-D / 2-D arrays. |
| Coding | Fill in parts of starter code in a specific language. |
| Frontend | Build a front-end app with a selectable framework / template. |

■ Official vs. Custom Challenges

Official Challenges are provided by Track and, depending on your plan, are available to all organizations.

Custom Challenges are created by an organization and are visible only to that organization.

Official Challenges must include at least one full-score solution to guarantee quality; Custom Challenges only need a compilable, non-zero solution.

■ Testing Framework

Track relies on TAP output.

• CLI / Function: write tests in JavaScript with mocha -R tap (helper library codecheck available).

• Coding: use any TAP-capable framework included in your Docker image.

ok n …     = pass
not ok n … = fail

■ track.yml Keys

Mandatory

type, test, testcases

Common optional keys

editable, readonly, hide, testExclude, images, allowNewFile, build, initialize, mainFile, languages, templateType, envConf, evaluationPoint, solutions, namedSolutions, shared, debug, watch, function, functionInputDir, systemTag

Example

type: coding
allowNewFile: true
test: mocha
editable:
  - answer.md
readonly:
  - test/base.js
  - test/cases.json
hide:
  - test/secret*
testExclude:
  - answer.md
languages: all
solutions:
  - solution.js
testcases:
  open: 8
  secret: 8
evaluationPoint:
  File IO: Check file operations
  Error Handling: Check exception handling

■ README.md

Write the problem statement in Markdown.

Track supports several Markdown extensions (see the extension guide).

Do not list README.md in track.yml.

■ Open Tests vs. Secret Tests

  • Files under readonly → open tests (visible to users)

• Files under hide   → secret tests (hidden)

testcases.open / testcases.secret must match the actual counts; mismatches raise warnings.

■ answer.md

Template shown to users for free-text comments after coding.

If absent, a blank page is displayed.

■ solution.md

Author-facing explanation of the model solution.

Visible only to administrators, never to candidates.

■ Internationalization

On compilation Track chooses files as follows:

• ja → README_ja.md, else README.md

• en → README_en.md, else README.md

solution.md follows the same rule.

You may add track_en.yml / track_ja.yml; top-level keys in these files override the main track.yml.

At runtime the environment variable CHALLENGE_LANGUAGE contains ja or en—use it to localize error messages.

■ BEGIN_CHALLENGE / END_CHALLENGE

In any file listed in editable or readonly, code lines between

BEGIN_CHALLENGE and END_CHALLENGE are removed before delivery to candidates (but remain visible in Marine Dev Mode).
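
For example, in a readonly test file (a sketch; the marker names are exact, while placing them in comments and the surrounding mocha test are illustrative):

// test/base.js – visible to candidates as an open test
describe("open tests", () => {
  it("stays visible to candidates", () => { /* ... */ });

  // BEGIN_CHALLENGE
  it("is stripped from the copy candidates receive", () => { /* ... */ });
  // END_CHALLENGE
});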

■ solutions

Register model answers.

Syntax: destName:sourcePath (dest = filename seen during testing).

Any solution.* file in the root folder is auto-registered.

CLI / Function: one solution.ext per language (plus additional files such as package.json if needed).

Coding: all listed files overwrite candidates’ versions during solution verification.
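
For example (the destName:sourcePath syntax and the auto-registration rule are from above; the file names are illustrative):

solutions:
  - solution.js                  # solution.* in the root folder is auto-registered
  - main.js:solution/main.js     # overwrites the candidate's main.js during verification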

■ namedSolutions

Define multiple solution sets with names.

Frontend Challenges use the template name as name.

namedSolutions:
  - name: react-ts
    files:
      - src/App.tsx:solution/react-ts/src/App.tsx
      - index.html:solution/react-ts/index.html

■ Docker Images

See env.md for all available images.

Coding Challenges must set envConf.

CLI / Function templates automatically select the correct image.

■ CLI Templates

Supported languages: c, cpp, csharp, java, nodejs, php, python, python3, scala, perl, ruby, go, swift, kotlin, rust.

Provide a Docker image and template to add more.

■ Function Definition (Function Challenges)

Define in track.yml:

function:
  name: myFunction
  arguments:
    - name: a
      type: string
    - name: b
      type: int
    - name: c
      type: string[]
  result:
    type: string[]

Supported types: int, string, float, boolean plus 1-D / 2-D arrays.

Strings cannot contain whitespace; int range ±9 007 199 254 740 991.

■ Writing Tests for CLI Challenges

Use codecheck with mocha:

const codecheck = require("codecheck");
const app = codecheck.consoleApp(process.env.APP_COMMAND);

codecheck
  .readLines("test/in/input.txt")
  .then(app.run)
  .then(result => {
    result.stdout.should.eql("expected output");
  });

APP_COMMAND contains the user’s executable.

■ templateType

CLI templates:

default (Args) | stdin (STDIN) | plain (no input)

If omitted, default is used.
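
In track.yml (type: cli and the stdin value are from above):

type: cli
templateType: stdin   # read the test input from STDIN instead of CLI arguments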

■ Writing Tests for Function Challenges

Define input JSON per test case under functionInputDir; Track converts it to a plain text file:

input1.in.json

{
  "a": "foo",
  "b": 1,
  "c": ["bar", "baz"]
}

input1.in

foo
1
2
bar
baz

Use codecheck or custom scripts to pass the values to the function and verify the return value.


■ initialize

Run one-time setup when the editor opens.

initialize:
  files:
    - test/judge.cpp
  commands:
    - g++ -std=c++11 -o test/judge test/judge.cpp
    - rm test/judge.cpp
  showConsole: true

■ evaluationPoint

Group tests by evaluation criteria.

evaluationPoint:
  FileIO: Test file operations
  ErrorHandle: Test error processing

Prefix a TAP test title with [FileIO] etc. to assign it.
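
With mocha, the prefix simply goes into the test title, for example (a sketch; the group names match the example above, the test bodies are illustrative):

describe("evaluation points", () => {
  it("[FileIO] writes the report file to disk", () => { /* ... */ });
  it("[ErrorHandle] rejects a missing input file", () => { /* ... */ });
});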


■ shared

Reuse files across multiple challenges.

readonly:
  - sql/init.sql
shared:
  - sql:../shared/sql

alias example:

shared:
  - sharedTest:../shared/test:test
  - sharedJS:../shared/js:_

_ places files in the root directory.


■ debug

Provide quick test commands for candidates.

debug:
  command: cat $f | $c
  raw:[Case 1]a b c
  file:[Case 2]test/in/sample.txt

Placeholders

$f = input file
$* = args list
$c = user command

codecheck-based challenges generate debug entries automatically.


■ systemTag

Flags that modify Track behavior:

• alwaysSave – Evaluate on every local run (needed for server-side deploy tasks).

• incompleteSolution – Allow official challenges without a full-score solution.

• simpleBuild – Skip heavy build steps (e.g., no npm install).


(End of document)

After creating your challenge according to the guide, send the entire project to the Track Customer Success team.


◾️ Supported Programming Languages

See the list of languages and runtime environments Track supports here.


◾️ How to Create Other Challenge Styles
