
CodeQL CDS Extractor autobuild Re-write Guide

Goals

The primary goals of this project are to create a more robust, well-tested, and maintainable CodeQL extractor for .cds files that implement Core Data Services (CDS) as part of the Cloud Application Programming (CAP) model.

Overview

This document provides a guide for the multi-step process of re-writing the CodeQL extractor for CDS by using an approach based on autobuild rather than index-files.

This document is meant to be a common reference and project guide while the iterative re-write is in progress, especially since there is more to this project than a simple re-write of the scripts that comprise CodeQL's extractor (tool) for CDS.

Challenges with the Current Extractor (using index-files)

The current extractor for CDS is based on index-files, an approach that has several limitations and challenges:

  1. Testability

    The current extractor is difficult to test, and especially difficult to troubleshoot when tests fail, because the implementation lacks unit tests and relies heavily on integration tests performed in a post-commit workflow that runs via GitHub Actions. This makes it harder to trace errors back to the source of the problem and adds significant delay to the development process.

  2. Performance

    The current extractor is slow and inefficient, especially when dealing with large projects or complex CDS files. This is due to the way index-files processes files, which can lead to long processing times and increased resource usage. There are several performance improvements that could be made to the extractor, but they all come down to avoiding work that we either do not need to do or that has already been done.

    • As one example of a performance problem, the index-files approach provides us with a list of all .cds files in the project and expects us to index them all. That makes sense for CodeQL (we want our database to have a copy of every in-scope source code file), but it is horribly inefficient from a CDS perspective, because the CDS format allows a single file to pull in multiple CDS definitions. The extractor is expected to handle this by parsing the declarative syntax of each .cds file to understand which other .cds files it imports, and to thereby avoid duplicate imports of files that are already (and only) used as library-style imports in top-level (project-level) CDS files. This is a non-trivial task, and the current extractor does not even try to parse the contents of the .cds files to determine which files are actually used in the project. Instead, it simply imports all .cds files found in the project, which can lead to duplicate imports and increased processing times.

    • Another performance problem is that the current index-files-based extractor spends a lot of time installing node dependencies, because it runs an npm install command in every "CDS project directory" it finds: every directory that contains a package.json file and that contains a .cds file either directly (as a sibling of the package.json file) or in some subdirectory at any depth. As a result, the extractor installs dependencies in directories that we would rather not modify, just to be able to use a specific version of @sap/cds and/or @sap/cds-dk (the dependencies needed to run the extractor). It also means that if we have five projects that all use the same version of @sap/cds and/or @sap/cds-dk, we install that version five separate times in five separate locations. This wastes time and creates a cleanup challenge, since each install changes the package-lock.json file and the node_modules subdirectory in each of those five project directories.

  3. Modularity

    The current extractor is mostly just one giant script, aka index-files.js, surrounded by a collection of small wrapper scripts (index-files.sh and index-files.cmd) that allow the JavaScript code to be run in different environments (i.e. Windows and Unix-like environments). While we cannot really get away from the wrapper scripts, we should refactor the "one giant script" (in a single index-files.js file) into a more modular design that allows us to break the extractor into smaller, more manageable pieces.

  4. Maintainability

    The current implementation does not mandate a consistent code style or best practices. For example, no linting rules are applied, and there are no scripts for enforcing a consistent style. This makes it difficult to maintain the code at a consistent level of quality; it would be much better to apply basic linting as a pre-commit task (i.e. performed in the developer's IDE). The current implementation also lacks documentation, which makes it difficult for new developers to understand how the extractor works and how to contribute to it.
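The import-parsing work described under the performance challenge above can be sketched in a few lines. This is a minimal illustration, not the final implementation: the function name is hypothetical, and the regex is an assumption covering only the subset of CDS `using ... from '<path>'` syntax relevant here.

```typescript
// Sketch: extract the project-local files referenced by `using ... from '<path>'`
// statements in a .cds source, so top-level files can be distinguished from
// library-style imports. Bare module names (e.g. '@sap/cds/common') resolve to
// installed packages, so only relative paths are kept.
function extractCdsImports(cdsSource: string): string[] {
  const importRegex = /^\s*using\s+[^;]*?\bfrom\s+['"]([^'"]+)['"]/gm;
  const imports: string[] = [];
  let match: RegExpExecArray | null;
  while ((match = importRegex.exec(cdsSource)) !== null) {
    if (match[1].startsWith('.')) {
      imports.push(match[1]);
    }
  }
  return imports;
}

const sample = `
using { Books } from './schema';
using { managed } from '@sap/cds/common';
namespace my.bookshop;
`;
console.log(extractCdsImports(sample)); // prints [ './schema' ]
```

A real implementation would also need to resolve the extracted paths against the importing file's directory and handle the implicit `.cds` extension, but even this sketch is enough to tell a library-style file from a top-level one.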

Goals for the Future Extractor (using autobuild)

The main goals for the autobuild-based CDS extractor are to:

  1. Improve the Performance of Running the CDS Extractor on Large Codebases: The performance problems with the current index-files-based CDS extractor are compounded on large codebases, where the duplicate-import problem is magnified by heavy use of library-style imports. The autobuild-based extractor will avoid this problem by parsing the .cds files to determine which files are actually used in the project, allowing it to skip duplicate imports and reduce processing times.

  2. Improve the Testability of the CDS Extractor: The autobuild-based extractor will be designed to be more testable, with a focus on unit tests and integration tests that can be run in a pre-commit workflow. This will allow us to catch errors early in the development process and make it easier to maintain the code over time. The new extractor will also be designed to be more modular, with a focus on breaking the code into smaller, more manageable pieces that can be tested independently.

All other goals are secondary to and/or in support of the above two goals.

Expected Technical Changes

  • The autobuild.ts script/code will need to determine its own list of .cds files to process when given a "source root" directory to be scanned (recursively) for .cds files, and will have to maintain some form of state while determining the most efficient way to process all of the applicable CDS statements without duplicating work. It will do this by combining parsing of the .cds files with a cache that keeps track of which files have already been processed. The cache will be stored in a JSON file that is created and updated as the extractor runs, allowing the extractor to avoid re-processing files that have already been processed, which will improve performance and reduce resource usage.

  • Keep track of the unique set of @sap/cds and @sap/cds-dk dependency combinations used by any "project directory" found under the "source root" directory, and create a temporary directory structure that stores the package.json, package-lock.json, and node_modules subdirectory for each unique combination of those dependencies. The temporary directory structure will be created in a subdirectory of the "source root" directory and cleaned up after the extractor has finished running. This makes installing CDS compiler dependencies much more efficient (each unique combination is installed exactly once, rather than once per project directory), makes it explicit which version of the CDS compiler is used for a given (sub-)project, and avoids making changes to the package.json and package-lock.json files in the project directories.

  • Use a new autobuild.ts script as the main entry point for the extractor's TypeScript code, meaning that the build process will compile the TypeScript code in autobuild.ts to JavaScript code in autobuild.js, which will then be run as the main entry point for the extractor. Instead of index-files.cmd and index-files.sh, we will have wrapper scripts such as autobuild.cmd and autobuild.sh that will be used to run the autobuild.js script in different environments (i.e. Windows and Unix-like environments).

  • The new autobuild.ts script will be kept as minimal as possible, with object-oriented code patterns used to encapsulate the functionality of the extractor in .ts files stored in a new src directory (project path would be extractors/cds/tools/src). This breaks the extractor into smaller, more manageable pieces and makes the code easier to test and maintain over time. The src directory will contain all of the TypeScript code for the extractor, organized into subdirectories based on functionality: for example, a parsers subdirectory for parsing code, a utils subdirectory for utility functions, and so on. This keeps the code organized and easy to navigate.

  • Use TypeScript as the primary language for the extractor, rather than JavaScript. We will still be running JavaScript when the extractor executes, but we will develop in TypeScript and compile it to JavaScript for use in the CodeQL extractor. TypeScript's type system and other features make the code easier to write, test, and maintain, and allow us to catch errors at compile time rather than runtime, which will make the extractor more robust.

  • Add unit tests for everything that can be unit tested, run as part of the pre-commit build process, so that errors are caught early in the development process and the code is easier to maintain over time. Setting this up will require modifications to the package.json file to include the necessary dependencies and scripts for running the tests, along with a testing framework, such as Jest or Mocha, to run the tests and report the results. The unit tests will live under a new test directory (project path would be extractors/cds/tools/test), organized into subdirectories that mirror the structure of the src directory. For example, if we add a src/parsers/cdsParser.ts file, we will also add a test/parsers/cdsParser.test.ts file containing its unit tests. This keeps the tests organized, easy to navigate, and easy to map back to the code they cover.
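The recursive "source root" scan with processed-file state, described in the first bullet above, might look roughly like the following. The function name is an assumption for illustration, and the in-memory Set stands in for the JSON cache file the real extractor would persist:

```typescript
import * as fs from 'fs';
import * as path from 'path';

// Sketch: recursively collect .cds files under a source root, skipping
// node_modules and any file already recorded as processed. Passing the same
// Set across runs (or rehydrating it from a JSON cache file) prevents
// re-processing files that have already been handled.
function findCdsFiles(sourceRoot: string, processed = new Set<string>()): string[] {
  const found: string[] = [];
  for (const entry of fs.readdirSync(sourceRoot, { withFileTypes: true })) {
    const fullPath = path.join(sourceRoot, entry.name);
    if (entry.isDirectory()) {
      if (entry.name === 'node_modules') continue; // installed deps are out of scope
      found.push(...findCdsFiles(fullPath, processed));
    } else if (entry.name.endsWith('.cds') && !processed.has(fullPath)) {
      processed.add(fullPath);
      found.push(fullPath);
    }
  }
  return found;
}
```

Calling `findCdsFiles` a second time with the same Set returns an empty list, which is the behavior the JSON-backed cache is meant to provide between extractor runs.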
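The dependency-combination tracking described above can be sketched by deriving one shared install directory per unique (@sap/cds, @sap/cds-dk) version pair. The interface and function names here are hypothetical, chosen only to illustrate the idea:

```typescript
import * as crypto from 'crypto';
import * as path from 'path';

// Hypothetical shape of one project's CDS compiler requirements, as read
// from its package.json dependencies/devDependencies.
interface CdsDependencyCombo {
  cds?: string;   // version range of @sap/cds, if declared
  cdsDk?: string; // version range of @sap/cds-dk, if declared
}

// Sketch: map a dependency combination to a stable directory under the
// temporary cache root. Two projects declaring the same versions get the
// same directory, so npm install runs once per combination, not per project.
function comboCacheDir(cacheRoot: string, combo: CdsDependencyCombo): string {
  const key = `${combo.cds ?? 'none'}|${combo.cdsDk ?? 'none'}`;
  const hash = crypto.createHash('sha256').update(key).digest('hex').slice(0, 12);
  return path.join(cacheRoot, hash);
}
```

The extractor would then write a minimal package.json into each such directory, run npm install there once, and point every matching (sub-)project at that directory's node_modules, leaving the projects' own package.json and package-lock.json files untouched.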

Examples of Improved CDS Parsing

TODO

Example 1: Parsing an index.cds CDS File with Multiple Definitions

Example 2: Parsing a schema.cds CDS File with Multiple Definitions

References