69. CobiGen

69.1. Configuration

CobiGen is configured via a configuration folder containing a context configuration, multiple template folders (each with its own templates configuration), and a number of templates in each template folder. Find some examples here. Thus, a simple folder structure might look like this:

CobiGen_Templates
 |- templateFolder1
    |- templates.xml
 |- templateFolder2
    |- templates.xml
 |- context.xml

69.1.1. Context Configuration

The context configuration (context.xml) always has the following root structure:

Listing 97. Context Configuration
<?xml version="1.0" encoding="UTF-8"?>
<contextConfiguration xmlns="http://capgemini.com"
                      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                      version="1.0">
    <triggers>
        ...
    </triggers>
</contextConfiguration>

The context configuration has a version attribute, which should match the version of the XSD the context configuration is an instance of. It should not state the version of the currently released CobiGen. This attribute should be maintained by the context configuration developers. If configured correctly, it allows CobiGen to give better feedback to the user and thus provides a better user experience. Currently there is only version v1.0; a changelog will be provided for further versions.

Trigger Node

As children of the <triggers> node you can define different triggers. By defining a <trigger> you declare a mapping between specific inputs and a templateFolder, which contains all templates that can be generated from the given input.

Listing 98. trigger configuration
<trigger id="..." type="..." templateFolder="..." inputCharset="UTF-8" >
    ...
</trigger>
  • The attribute id should be unique within a context configuration. It is necessary for efficient internal processing.

  • The attribute type declares a specific trigger interpreter, which might be provided by additional plug-ins. A trigger interpreter has to provide an input reader, which reads specific inputs and creates a template object model from them, to be processed by the FreeMarker template engine later on. Have a look at the documentation of the plug-in of your interest to see which trigger types, and thus inputs, are currently supported.

  • The attribute templateFolder declares the relative path to the template folder, which will be used if the trigger gets activated.

  • The attribute inputCharset (optional) determines the charset to be used for reading any input file.

Matcher Node

A trigger will be activated if its matchers satisfy the following formula:

!(NOT || … || NOT) && AND && … && AND && (OR || … || OR)

Here NOT/AND/OR stands for the accumulationType of a matcher (see below); e.g. NOT means 'a matcher with accumulationType NOT matches the given input'. Thus, in addition to an input reader, a trigger interpreter has to define at least one satisfiable set of matchers to be fully functional. A <matcher> node declares a specific characteristic a valid input should have.

Listing 99. Matcher Configuration
<matcher type="..." value="..." accumulationType="...">
    ...
</matcher>
  • The attribute type declares a specific type of matcher, which has to be provided by the surrounding trigger interpreter. Have a look at the documentation of the plug-in that provides the used trigger type for more information about valid matchers and their functionalities.

  • The attribute value might contain any information necessary for processing the matcher’s functionality. Have a look at the relevant plug-in’s documentation for more detail.

  • The attribute accumulationType (optional) specifies how the matcher will influence the trigger activation. Valid values are:

    • OR (default): if any matcher of accumulation type OR matches, the trigger will be activated as long as there are no further matchers with different accumulation types

    • AND: if any matcher with AND accumulation type does not match, the trigger will not be activated

    • NOT: if any matcher with NOT accumulation type matches, the trigger will not be activated
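
For illustration, the following sketch combines accumulation types using the Java plug-in’s fqn matcher described later in this chapter (the patterns are hypothetical):

<trigger id="..." type="java" templateFolder="...">
    <!-- intended semantics: activate for inputs whose fully qualified name matches
         the entity pattern (AND) and does not match the test pattern (NOT) -->
    <matcher type="fqn" value=".+\.entity\..+" accumulationType="AND" />
    <matcher type="fqn" value=".+Test" accumulationType="NOT" />
</trigger>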

VariableAssignment Node

Finally, a <matcher> node can have multiple <variableAssignment> nodes as children. Variable assignments allow parameterizing the generation with additional values, which will be added to the object model for template processing. The variables declared using variable assignments will be made accessible in the templates.xml as well as in the object model for template processing via the namespace variables.*.

Listing 100. Complete Configuration Pattern
<?xml version="1.0" encoding="UTF-8"?>
<contextConfiguration xmlns="http://capgemini.com"
                      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                      version="1.0">
    <triggers>
        <trigger id="..." type="..." templateFolder="...">
            <matcher type="..." value="...">
                <variableAssignment type="..." key="..." value="..." />
            </matcher>
        </trigger>
    </triggers>
</contextConfiguration>
  • The attribute type declares the type of variable assignment to be processed by the plug-in providing the trigger interpreter. This attribute enables variable assignments with different dynamic value resolutions.

  • The attribute key declares the namespace under which the resolved value will be accessible later on.

  • The attribute value might declare a constant value to be assigned or any hint for the value resolution done by the plug-in providing the trigger interpreter. For instance, if type is regex, value states the number of the regular expression group whose match will be assigned (1, 2, 3, …).

ContainerMatcher Node

The <containerMatcher> node is an additional matcher for matching containers of multiple input objects. Such a container might be a package, which encloses multiple types, or, more generically, a model, which encloses multiple elements. A container matcher can be declared side by side with other matchers:

Listing 101. ContainerMatcher Declaration
<?xml version="1.0" encoding="UTF-8"?>
<contextConfiguration xmlns="http://capgemini.com"
                      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                      version="1.0">
    <triggers>
        <trigger id="..." type="..." templateFolder="..." >
            <containerMatcher type="..." value="..." retrieveObjectsRecursively="..." />
            <matcher type="..." value="...">
                <variableAssignment type="..." variable="..." value="..." />
            </matcher>
        </trigger>
    </triggers>
</contextConfiguration>
  • The attribute type declares a specific type of matcher, which has to be provided by the surrounding trigger interpreter. Have a look at the documentation of the plug-in that provides the used trigger type for more information about valid matchers and their functionalities.

  • The attribute value might contain any information necessary for processing the matcher’s functionality. Have a look at the relevant plug-in’s documentation for more detail.

  • The attribute retrieveObjectsRecursively (optional boolean) states whether the children of the input should be retrieved recursively to find matching inputs for generation.

The semantics of a container matcher are the following:

  • A <containerMatcher> does not declare any <variableAssignment> nodes

  • A <containerMatcher> matches an input if and only if one of its enclosed elements satisfies a set of <matcher> nodes of the same <trigger>

  • Inputs that match a <containerMatcher> will cause a generation for each enclosed element

69.1.2. Templates Configuration

The templates configuration (templates.xml) specifies which templates exist and under which circumstances they will be generated. There are two possible configuration styles:

  1. Configure the template meta-data for each template file by template nodes

  2. (since cobigen-core-v1.2.0): Configure templateScan nodes to automatically retrieve a default configuration for all files within a configured folder and possibly modify the automatically configured templates using templateExtension nodes

To get an intuition of the idea, the first (more extensive) configuration style is described initially. Its root structure looks as follows:

Listing 102. Extensive Templates Configuration
<?xml version="1.0" encoding="UTF-8"?>
<templatesConfiguration xmlns="http://capgemini.com"
                        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                        version="1.0" templateEngine="FreeMarker">
    <templates>
            ...
    </templates>
    <increments>
            ...
    </increments>
</templatesConfiguration>

The root node <templatesConfiguration> specifies two attributes. The attribute version provides further usability support and is handled analogously to the version attribute of the context configuration. The optional attribute templateEngine specifies the template engine to be used for processing the templates (since cobigen-core-4.0.0); by default it is set to FreeMarker. The node <templatesConfiguration> allows two different grouping nodes as children: the <templates> node, which groups all declarations of templates, and the <increments> node, which groups all declarations about increments.

Template Node

The <templates> node groups multiple <template> declarations. Each template file has to be registered at least once as a template in order to be considered for generation.

Listing 103. Example Template Configuration
<templates>
    <template name="..." destinationPath="..." templateFile="..." mergeStrategy="..." targetCharset="..." />
    ...
</templates>

A template declaration consists of the following information:

  • The attribute name specifies a unique ID within the templates configuration, which will later be reused in the increment definitions.

  • The attribute destinationPath specifies the destination path the template will be generated to (see the example after this list). It is possible to use all variables defined by variable assignments within the path declaration using the FreeMarker syntax ${variables.*}. While resolving the variable expressions, each dot within the value will automatically be replaced by a slash. This behavior accounts for the transformation of Java packages to paths, as CobiGen was first developed in the context of the Java world. Furthermore, the destination path variable resolution provides the following additional built-in operators analogous to the FreeMarker syntax:

    • ?cap_first analogous to the FreeMarker built-in

    • ?uncap_first analogous to the FreeMarker built-in

    • ?lower_case analogous to the FreeMarker built-in

    • ?upper_case analogous to the FreeMarker built-in

    • ?replace(regex, replacement) - Replaces all occurrences of the regular expression regex in the variable’s value with the given replacement string. (since cobigen-core v1.1.0)

    • ?removeSuffix(suffix) - Removes the given suffix from the variable’s value if and only if the variable’s value ends with the given suffix. Otherwise nothing will happen. (since cobigen-core v1.1.0)

    • ?removePrefix(prefix) - Analogous to ?removeSuffix, but removes the prefix of the variable’s value. (since cobigen-core v1.1.0)

  • The attribute templateFile describes the path to the template file to be generated, relative to the template folder specified in the trigger.

  • The attribute mergeStrategy (optional) declares the type of merge mechanism to be used when the destinationPath points to an already existing file. CobiGen itself only comes with the mergeStrategy override, which enforces a complete regeneration of the file. Additional merge strategies have to be obtained from the different plug-ins' documentations (see here for Java, XML, properties, and text). Default: not set (meaning not mergeable)

  • The attribute targetCharset (optional) declares the encoding with which the contents will be written to the destination file. This also covers reading an existing file at the destination path in order to merge its contents with the newly generated ones. Default: UTF-8
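
To illustrate the destination path resolution, here is a sketch of a template declaration using hypothetical variables (rootPackage, component, pojoName, as assigned by variable assignments like the ones shown in the plug-in sections below); remember that dots in variable values are automatically replaced by slashes:

<template name="..." templateFile="..."
          destinationPath="src/main/java/${variables.rootPackage}/${variables.component}/dataaccess/api/${variables.pojoName?cap_first}.java" />

With the hypothetical values variables.rootPackage=com.example.app, variables.component=sales and variables.pojoName=orderEntity, the path would resolve to src/main/java/com/example/app/sales/dataaccess/api/OrderEntity.java.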

(Since version 4.1.0) It is possible to reference external templates (templates defined on another trigger) by using <incrementRef …> as explained here.

TemplateScan Node

(since cobigen-core-v1.2.0)

The second configuration style for template meta-data initially scans all available templates and automatically configures them with a default set of meta-data. A scanning configuration might look like this:

Listing 104. Example of Template-scan configuration
<?xml version="1.0" encoding="UTF-8"?>
<templatesConfiguration xmlns="http://capgemini.com"
                        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                        version="1.2">
    <templateScans>
        <templateScan templatePath="templates" templateNamePrefix="prefix_" destinationPath="src/main/java"/>
    </templateScans>
</templatesConfiguration>

You can specify multiple <templateScan …​> nodes for different templatePaths and different templateNamePrefixes.

  • The attribute name can be specified in order to reference the templates found by a template-scan within an increment later on. (since cobigen-core-v2.1.)

  • The templatePath specifies the relative path from the templates.xml to the root folder from which the template scan should be performed.

  • The templateNamePrefix (optional) defines a common id prefix, which will be added to all found and automatically configured templates.

  • The destinationPath defines the root folder all found templates will be generated to, i.e. the root folder acts as a path prefix for all found and automatically configured templates.

A templateScan will result in the following default configuration of templates. For each file found, a template will be created virtually with the following default values (see the illustration after this list):

  • id: file name without .ftl extension prefixed by templateNamePrefix from template-scan

  • destinationPath: relative file path of the file found with the prefix defined by destinationPath from template-scan. Furthermore,

    • it is possible to use the syntax for accessing and modifying variables as described for the attribute destinationPath of the template node, with the only difference that, due to file system restrictions, all ?-signs (for built-ins) have to be replaced with #-signs.

    • the files to be scanned should declare their final file extension by the following file naming convention: <filename>.<extension>.ftl. The file extension .ftl will thus be removed during generation.

  • templateFile: relative path to the file found

  • mergeStrategy: (optional) not set means not mergeable

  • targetCharset: (optional) defaults to UTF-8
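
For illustration, assume the scan of Listing 104 finds a hypothetical file templates/dataaccess/Repository.java.ftl. The scan would then behave roughly as if the following template had been declared (the exact relative paths depend on the scan root):

<template name="prefix_Repository.java"
          destinationPath="src/main/java/dataaccess/Repository.java"
          templateFile="templates/dataaccess/Repository.java.ftl" />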

(Since version 4.1.0) It is possible to reference external templateScans (templateScans defined on another trigger) by using <incrementRef …> as explained here.

TemplateExtension Node

(since cobigen-core-v1.2.0)

In addition to the templateScan declaration, it is possible to override specific attributes of any scanned and automatically configured template.

Listing 105. Example Configuration of a TemplateExtension
<templates>
    <templateExtension ref="prefix_FooClass.java" mergeStrategy="javamerge" />
</templates>

<templateScans>
    <templateScan templatePath="foo" templateNamePrefix="prefix_" destinationPath="src/main/java/foo"/>
</templateScans>

Let's assume that the above example declares a template-scan for the folder foo, which contains a file FooClass.java.ftl at any folder depth. The template scan will automatically create a virtual template declaration with id=prefix_FooClass.java and further default configuration.

The templateExtension declaration above references the scanned template via the attribute ref and overrides the mergeStrategy of the automatically configured template with the value javamerge. This way the needed templates configuration is kept to a minimum.

(Since version 4.1.0) It is possible to reference external templateExtensions (templateExtensions defined on another trigger) by using <incrementRef …> as explained here.

Increment Node

The <increments> node groups multiple <increment> nodes, which can be seen as a collection of templates to be generated. An increment will be defined by a unique id and a human readable description.

<increments>
    <increment id="..." description="...">
        <incrementRef ref="..." />
        <templateRef ref="..." />
        <templateScanRef ref="..." />
    </increment>
</increments>

An increment might contain multiple increments and/or templates, which are referenced using <incrementRef …>, <templateRef …>, and <templateScanRef …> nodes, respectively. These nodes only declare the attribute ref, which references an increment, a template, or a template-scan by its id or name.

(Since version 4.1.0) A special case of <incrementRef …> is the external incrementRef. By default, <incrementRef …> nodes reference increments defined in the same templates.xml file. So, for example, we could have:

<increments>
    <increment id="incA" description="...">
        <incrementRef ref="incB" />
    </increment>
    <increment id="incB" description="...">
        <templateRef .... />
        <templateScan .... />
    </increment>
</increments>

However, if we want to reference an increment that is not defined inside our templates.xml (an increment defined for another trigger), we can use an external incrementRef as shown below:

<increment name="..." description="...">
    <incrementRef ref="trigger_id::increment_id"/>
</increment>

The ref string is split using :: as the delimiter. The first part of the string is the id of the trigger to reference; the second part is the id of an increment defined within that trigger. Currently, this functionality only works when both templates configurations use the same kind of input file.

69.1.3. Java Template Logic

(since cobigen-core-3.0.0, which is included in the Eclipse and Maven plug-ins since version 2.0.0) In addition, it is possible to implement more complex template logic in custom Java code. To enable this feature, simply import the CobiGen_Templates by clicking Adapt Templates, turn the project into a simple Maven project (if it is not one already), and implement any Java logic in the common Maven layout (e.g. in the source folder src/main/java). Each Java class will be instantiated by CobiGen once per generation process, so you can even store state within a Java class instance during generation. However, there is currently no guarantee regarding the template processing order.

As a consequence, you have to implement your Java classes with a public default (no-argument) constructor so that they can be used by any template. Methods of the implemented Java classes can be called within templates using the standard FreeMarker expression for calling bean methods: SimpleType.methodName(param1). Until now, CobiGen shadows multiple types with the same simple name non-deterministically, so please avoid that situation.
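
As an illustration, here is a minimal sketch of such a template utility class (class, package and method names are hypothetical):

package com.example.templates.util;

/** Hypothetical template helper; CobiGen instantiates it once per generation process. */
public class StringUtil {

    /** Public default constructor, required so that CobiGen can instantiate the class. */
    public StringUtil() {
    }

    /** Returns the given text with a lower-cased first character. */
    public String uncapitalize(String text) {
        if (text == null || text.isEmpty()) {
            return text;
        }
        return Character.toLowerCase(text.charAt(0)) + text.substring(1);
    }
}

Within a template, the method could then be called as ${StringUtil.uncapitalize(pojo.name)}.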

Finally, if you would like to do some reflection within your Java code, accessing any type of the template project or any type referenced by the input, you should load classes using the classloader of the util classes. CobiGen takes care of building the correct classloader, including the classpath of the input source as well as the classpath of the template project. If you use any other classloader or build your own, there is no guarantee that generation will succeed.

69.1.4. Template Properties

(since cobigen-core-4.0.0) When using a configuration with template scans, you can make use of properties in templates, specified in property files named cobigen.properties located next to the templates. The property files are specified as Java property files and can be nested in subfolders. Properties are resolved with property shading: properties defined nearest to the template to be generated take precedence. In addition, a cobigen.properties file can be specified in the target folder root (in the Eclipse plug-in, this is equal to the source project root). These properties take precedence over template properties specified in the template folder.
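
A minimal sketch of property shading with hypothetical files and values:

CobiGen_Templates
 |- templateFolder1
    |- cobigen.properties          (e.g. author=GlobalTeam)
    |- dataaccess
       |- cobigen.properties       (e.g. author=DataTeam, takes precedence for templates below dataaccess)
       |- Repository.java.ftl

A cobigen.properties file in the target folder root would in turn take precedence over both of these template properties.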

Note
It is not allowed to override context variables in cobigen.properties specifications, as we have not found any interesting use case; this is most probably an error of the template designer. CobiGen will raise an error in this case.
Multi module support or template target path redirects

(since cobigen-core-4.0.0) One special property you can specify in the template properties is relocate. It causes the current folder and its subfolders to be relocated at destination path resolution time. Take the following example:

folder
  - sub1
    Template.java.ftl
    cobigen.properties

Let the cobigen.properties file contain the line relocate=../sub2/${cwd}. Given that, the relative destination path of Template.java.ftl resolves to folder/sub2/Template.java. See the template scan configuration for more information about basic path resolution. The relocate property specifies a relative path from the location of the cobigen.properties file. The ${cwd} placeholder contains the remaining relative path from the cobigen.properties location to the template file. In this basic example it just contains Template.java.ftl, but it may be any relative path, including subfolders of sub1 and their templates. With the relocate feature you can even step out of the root path, which in general is the project/Maven module the input is located in. This enables template designers to address, e.g., Maven modules located next to the module the input is coming from.

69.1.5. Basic Template Model

In addition to what is served by the model builders of the different plug-ins, CobiGen provides a minimal model based on context variables as well as CobiGen properties. This minimal model is independent of the input format and is part of the template model for every generation.

69.1.6. Plugin Mechanism

Since cobigen-core 4.1.0, the plug-in discovery mechanism has changed. Previously, it was necessary to register new plug-ins programmatically, which required every tool integration, i.e. for Eclipse or Maven, to depend on every plug-in to be released. This made release cycles take a long time, as all plug-ins had to be integrated into a final release of the Maven or Eclipse integration.

Now, plug-ins are automatically discovered on the classpath via the Java ServiceLoader mechanism. This also affects the setup of the Eclipse and Maven integrations and allows modular releases of CobiGen in the future. We are now able to provide faster rollouts of bug-fixes in any of the plug-ins, as they can be released completely independently.

69.2. Plug-ins

69.2.1. Java Plug-in

The CobiGen Java Plug-in comes with a new input reader for Java artifacts, new Java-related triggers and matchers, as well as a merging mechanism for Java sources.

Trigger extension

The Java Plug-in provides a new trigger for Java related inputs. It accepts different representations as inputs (see Java input reader) and provides additional matching and variable assignment mechanisms. The configuration in the context.xml for this trigger looks like this:

  • type 'java'

    Listing 106. Example of a java trigger definition
    <trigger id="..." type="java" templateFolder="...">
        ...
    </trigger>

    This trigger type enables Java elements as inputs.

Matcher types

With the trigger you might define matchers, which restrict the input with respect to specific aspects:

  • type 'fqn' → full qualified name matching

    Listing 107. Example of a java trigger definition with a full qualified name matcher
    <trigger id="..." type="java" templateFolder="...">
        <matcher type="fqn" value="(.+)\.persistence\.([^\.]+)\.entity\.([^\.]+)">
            ...
        </matcher>
    </trigger>

    This trigger will be enabled if the full qualified name (fqn) of the declaring input class matches the given regular expression (value).

  • type 'package' → package name of the input

    Listing 108. Example of a java trigger definition with a package name matcher
    <trigger id="..." type="java" templateFolder="...">
        <matcher type="package" value="(.+)\.persistence\.([^\.]+)\.entity">
            ...
        </matcher>
    </trigger>

    This trigger will be enabled if the package name (package) of the declaring input class matches the given regular expression (value).

  • type 'expression'

    Listing 109. Example of a java trigger definition with an expression matcher
    <trigger id="..." type="java" templateFolder="...">
        <matcher type="expression" value="instanceof java.lang.String">
            ...
        </matcher>
    </trigger>

    This trigger will be enabled if the expression evaluates to true. Valid expressions are

  • instanceof fqn: checks an 'is a' relation of the input type

  • isAbstract: checks, whether the input type is declared abstract

ContainerMatcher types

Additionally, the java plugin provides the ability to match packages (containers) as follows:

  • type 'package'

    Listing 110. Example of a java trigger definition with a container matcher for packages
    <trigger id="..." type="java" templateFolder="...">
        <containerMatcher type="package" value="com\.example\.app\.component1\.persistence.entity" />
    </trigger>

    The container matcher matches packages provided by the type com.capgemini.cobigen.javaplugin.inputreader.to.PackageFolder with a regular expression stated in the value attribute. (See containerMatcher semantics to get more information about containerMatchers itself.)

VariableAssignment types

Furthermore, the plug-in provides the ability to extract information from each input for further processing in the templates. The values assigned by variable assignments will be made available in the templates and in the destinationPath of the templates.xml through the namespace variables.<key>. The Java Plug-in currently provides two different mechanisms:

  • type 'regex' → regular expression group

    <trigger id="..." type="java" templateFolder="...">
        <matcher type="fqn" value="(.+)\.persistence\.([^\.]+)\.entity\.([^\.]+)">
            <variableAssignment type="regex" key="rootPackage" value="1" />
            <variableAssignment type="regex" key="component" value="2" />
            <variableAssignment type="regex" key="pojoName" value="3" />
        </matcher>
    </trigger>

This variable assignment assigns the value of the given regular expression group number to the given key.
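
For a hypothetical input class com.example.app.persistence.sales.entity.OrderEntity, the matcher above would resolve the variables as follows:

variables.rootPackage = com.example.app   (group 1)
variables.component   = sales             (group 2)
variables.pojoName    = OrderEntity       (group 3)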

  • type 'constant' → constant parameter

    <trigger id="..." type="java" templateFolder="...">
        <matcher type="fqn" value="(.+)\.persistence\.([^\.]+)\.entity\.([^\.]+)">
            <variableAssignment type="constant" key="domain" value="restaurant" />
        </matcher>
    </trigger>

This variable assignment assigns the value to the key as a constant.

Java input reader

The CobiGen Java Plug-in implements an input reader for parsed Java sources as well as for Java Class<?> objects (loaded by reflection), so API users can pass Class<?> objects as well as JavaClass objects for generation. The latter depend on QDox, which is used for parsing and merging Java sources. To obtain correctly parsed Java inputs, you can use the JavaParserUtil, which provides static functionality to parse Java files and return the appropriate JavaClass object.

Furthermore, due to the model building restrictions of both input types (see below), it is also possible to provide an array of length two as input, containing the Class<?> object as well as the JavaClass object of the same class.

Template object model

No matter whether you use reflection objects or parsed Java classes as input, you will get the following object model for template creation (a usage sketch follows below):

  • classObject ('Class' :: Class object of the Java input)

  • pojo

    • name ('String' :: Simple name of the input class)

    • package ('String' :: Package name of the input class)

    • canonicalName ('String' :: Full qualified name of the input class)

    • annotations ('Map<String, Object>' :: Annotations, which will be represented by a mapping of the full qualified type of an annotation to its value. To gain template compatibility, the key will be stored with '_' instead of '.' in the full qualified annotation type. Furthermore, the annotation might be recursively defined and thus be accessed using the same type of mapping. Example ${pojo.annotations.javax_persistence_Id})

    • javaDoc ('Map<String, Object>') :: A generic way of addressing all available javaDoc doclets and comments. The only fixed variable is comment (see below). All other provided variables depend on the doclets found while parsing. The value of a doclet can be accessed by the doclets name (e.g. ${…​javaDoc.author}). In case of doclet tags that can be declared multiple times (currently @param and @throws), you will get a map, which you access in a specific way (see below).

      • comment ('String' :: javaDoc comment, which does not include any doclets)

      • params ('Map<String,String> :: javaDoc parameter info. If the comment follows proper conventions, the key will be the name of the parameter and the value being its description. You can also access the parameters by their number, as in arg0, arg1 etc, following the order of declaration in the signature, not in order of javadoc)

      • throws ('Map<String,String> :: javaDoc exception info. If the comment follows proper conventions, the key will be the name of the thrown exception and the value being its description)

    • extendedType ('Map<String, Object>' :: The supertype, represented by a set of mappings (since cobigen-javaplugin v1.1.0)

      • name ('String' :: Simple name of the supertype)

      • canonicalName ('String' :: Full qualified name of the supertype)

      • package ('String' :: Package name of the supertype)

    • implementedTypes ('List<Map<String, Object>>' :: A list of all implementedTypes (interfaces) represented by a set of mappings (since cobigen-javaplugin v1.1.0)

      • interface ('Map<String, Object>' :: List element)

        • name ('String' :: Simple name of the interface)

        • canonicalName ('String' :: Full qualified name of the interface)

        • package ('String' :: Package name of the interface)

    • fields ('List<Map<String, Object>>' :: List of fields of the input class) (renamed since cobigen-javaplugin v1.2.0; previously attributes)

      • field ('Map<String, Object>' :: List element)

        • name ('String' :: Name of the Java field)

        • type ('String' :: Type of the Java field)

        • canonicalType ('String' :: Full qualified type declaration of the Java field’s type)

        • 'isId' ('Deprecated' :: 'boolean' :: true if the Java field or its setter or its getter is annotated with the javax.persistence.Id annotation, false otherwise. Equivalent to ${pojo.attributes[i].annotations.javax_persistence_Id?has_content})

        • javaDoc (see pojo.javaDoc)

        • annotations (see pojo.annotations with the remark, that for fields all annotations of its setter and getter will also be collected)

    • methodAccessibleFields ('List<Map<String, Object>>' :: List of fields of the input class or its inherited classes, which are accessible using setter and getter methods)

      • same as for field (but without javaDoc!)

    • methods ('List<Map<String, Object>>' :: The list of all methods, whereas one method will be represented by a set of property mappings)

      • method ('Map<String, Object>' :: List element)

        • name ('String' :: Name of the method)

        • javaDoc (see pojo.javaDoc)

        • annotations (see pojo.annotations)

Furthermore, when providing a Class<?> object as input, the Java Plug-in will provide additional functionalities as template methods (deprecated):

  1. isAbstract(String fqn) (Checks whether the type with the given full qualified name is an abstract class. Returns a boolean value.) (since cobigen-javaplugin v1.1.1) (deprecated)

  2. isSubtypeOf(String subType, String superType) (Checks whether the subType declared by its full qualified name is a sub type of the superType declared by its full qualified name. Equals the Java expression subType instanceof superType and so also returns a boolean value.) (since cobigen-javaplugin v1.1.1) (deprecated)
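
As an illustration, here is a minimal FreeMarker template sketch that only uses the model described above (the generated builder class is hypothetical):

package ${pojo.package};

/** Generated from ${pojo.canonicalName}. */
public class ${pojo.name}Builder {
<#list pojo.fields as field>
    private ${field.type} ${field.name};
</#list>
}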

Model Restrictions

As stated before, both inputs (Class<?> objects and JavaClass objects) have restrictions with respect to model building. In the following, these restrictions are listed for both models: the Parsed Java Model, which results from a JavaClass input, and the Reflected Java Model, which results from a Class<?> input.

It is important to understand that these restrictions are only present if you work with either the Parsed Model or the Reflected Model alone. If you use the Maven Build Plug-in or the Eclipse Plug-in, these two models are merged so that they mutually compensate for each other's weaknesses.

Parsed Model
  • annotations of the input’s supertype are not accessible due to restrictions in the QDox library. So pojo.methodAccessibleFields[i].annotations will always be empty for super type fields.

  • annotations' parameter values are available as Strings only (e.g. the Boolean value true is transformed into "true"). This also holds for the Reflected Model.

  • fields of "supersupertypes" of the input JavaClass are not available at all. So pojo.methodAccessibleFields will only contain the input type’s and the direct superclass’s fields.

  • [resolved, since cobigen-javaplugin 1.3.1] field types of supertypes are always canonical. So pojo.methodAccessibleFields[i].type will always provide the same value as pojo.methodAccessibleFields[i].canonicalType (e.g. java.lang.String instead of the expected String) for super type fields.

Reflected Model
  • annotations' parameter values are available as Strings only (e.g. the Boolean value true is transformed into "true"). This also holds for the Parsed Model.

  • annotations are only available if the respective annotation has @Retention(value=RUNTIME), otherwise the annotations are to be discarded by the compiler or by the VM at run time. For more information see RetentionPolicy.

  • information about generic types is lost. E.g. a field’s/ methodAccessibleField’s type for List<String> can only be provided as List<?>.

Merger extensions

The Java Plug-in provides two additional merging strategies for Java sources, which can be configured in the templates.xml:

  • Merge strategy javamerge (merges two Java resources and keeps the existing Java elements on conflicts)

  • Merge strategy javamerge_override (merges two Java resources and overrides the existing Java elements on conflicts)

In general merging of two Java sources will be processed as follows:

The precondition for merging generated contents with existing ones is a common Java root class, i.e. surrounding class. If this is the case, this class and all further inner classes will be merged recursively. The following Java elements will be merged, and conflicts will be resolved according to the configured merge strategy:

  • extends and implements relations of a class: Conflicts can only occur for the extends relation.

  • Annotations of a class: Conflicted if an annotation declaration already exists.

  • Fields of a class: Conflicted if there is already a field with the same name in the existing sources. (Will be replaced / ignored in total, also including annotations)

  • Methods of a class: Conflicted if there is already a method with the same signature in the existing sources. (Will be replaced / ignored in total, also including annotations)

69.2.2. Property Plug-in

The CobiGen Property Plug-in currently only provides different merge mechanisms for documents written in Java property syntax.

Merger extensions

There are two merge strategies for Java properties, which can be configured in the templates.xml:

  • Merge strategy propertymerge (merges two properties documents and keeps the existing properties on conflicts)

  • Merge strategy propertymerge_override (merges two properties documents and overrides the existing properties on conflicts)

Both documents (base and patch) will be parsed using the Java 7 API and compared according to their keys. A conflict occurs if a key in the patch already exists in the base document.
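
A minimal sketch with hypothetical properties, illustrating both strategies:

base:
    app.name=MyApp
    app.port=8080
patch:
    app.name=Other
    app.locale=en
result with propertymerge (existing value kept on conflict):
    app.name=MyApp
    app.port=8080
    app.locale=en
result with propertymerge_override (patch value wins on conflict):
    app.name=Other
    app.port=8080
    app.locale=en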

69.2.3. XML Plug-in

The CobiGen XML Plug-in comes with an input reader for XML artifacts, XML-related triggers and matchers, and different merge mechanisms for XML result documents.

Trigger extension

(since cobigen-xmlplugin v2.0.0)

The XML Plug-in provides a trigger for xml related inputs. It accepts xml documents as input (see XML input reader) and provides additional matching and variable assignment mechanisms. The configuration in the context.xml for this trigger looks like this:

  • type 'xml'

    Listing 111. Example of a xml trigger definition.
    <trigger id="..." type="xml" templateFolder="...">
        ...
    </trigger>

    This trigger type enables xml documents as inputs.

  • type 'xpath'

    Listing 112. Example of a xpath trigger definition.
    <trigger id="..." type="xpath" templateFolder="...">
        ...
    </trigger>

    This trigger type enables xml documents as container inputs, which consist of several subdocuments.

ContainerMatcher type

A ContainerMatcher checks whether the input is a valid container.

  • xpath: type: 'xpath'

    Listing 113. Example of an xml trigger definition with an xpath containerMatcher.
    <trigger id="..." type="xml" templateFolder="...">
        <containerMatcher type="xpath" value="./uml:Model//packagedElement[@xmi:type='uml:Class']">
            ...
        </containerMatcher>
    </trigger>

    Before applying any matcher, this containerMatcher checks whether the XML file contains a node "uml:Model" with a child node "packagedElement" that has an attribute "xmi:type" with the value "uml:Class".

Matcher types

With the trigger you might define matchers, which restrict the input with respect to specific aspects:

  • xml: type 'nodename' → document’s root name matching

    Listing 114. Example of a xml trigger definition with a nodename matcher
    <trigger id="..." type="xml" templateFolder="...">
        <matcher type="nodename" value="\D\w*">
            ...
        </matcher>
    </trigger>

    This trigger will be enabled if the root name of the declaring input document matches the given regular expression (value).

  • xpath: type: 'xpath' → matching a node with a xpath value

    Listing 115. Example of a xpath trigger definition with a xpath matcher.
    <trigger id="..." type="xml" templateFolder="...">
        <matcher type="xpath" value="/packagedElement[@xmi:type='uml:Class']">
            ...
        </matcher>
    </trigger>

    This trigger will be enabled if the XML file contains a node "/packagedElement" where the "xmi:type" property equals "uml:Class".

VariableAssignment types

Furthermore, the plug-in provides the ability to extract information from each input for further processing in the templates. The values assigned by variable assignments will be made available in the templates and in the destinationPath of the templates.xml through the namespace variables.<key>. The XML Plug-in currently provides only one mechanism:

  • type 'constant' → constant parameter

    <trigger id="..." type="xml" templateFolder="...">
        <matcher type="nodename" value="\D\w*">
            <variableAssignment type="constant" key="domain" value="restaurant" />
        </matcher>
    </trigger>

This variable assignment assigns the value to the key as a constant.

XML input reader

The CobiGen XML Plug-in implements an input reader for parsed XML documents, so API users can pass org.w3c.dom.Document objects for generation. To obtain correctly parsed XML inputs, you can use the xmlplugin.util.XmlUtil, which provides static functionality to parse XML files or input streams and return the appropriate Document object.

Template object

Due to the heterogeneous structure an XML document can have, the XML input reader does not always create exactly the same model structure (in contrast to the Java input reader). For example, the model’s depth differs strongly depending on the input document. To allow navigational access to the nodes, the model also depends on the document’s element node names. All child elements with unique names are directly accessible via their names. In addition, it is possible to iterate over all child elements with the help of the child list Children, so child elements with non-unique names can be accessed as well.

The XML input reader will create the following object model for template creation (EXAMPLEROOT, EXAMPLENODE1, EXAMPLENODE2, EXAMPLEATTR1,…​ are just used here as examples. Of course they will be replaced later by the actual node or attribute names):

  • ~EXAMPLEROOT~ ('Map<String, Object>' :: common element structure)

    • _nodeName_ ('String' :: Simple name of the root node)

    • _text_ ('String' :: Concatenated text content (PCDATA) of the root node)

    • TextNodes ('List<String>' :: List of all the root’s text node contents)

    • _at_~EXAMPLEATTR1~ ('String' :: String representation of the attribute’s value)

    • _at_~EXAMPLEATTR2~ ('String' :: String representation of the attribute’s value)

    • _at_…​

    • Attributes ('List<Map<String, Object>>' :: List of the root’s attributes

      • at ('Map<String, Object>' :: List element)

        • _attName_ ('String' :: Name of the attribute)

        • _attValue_ ('String' :: String representation of the attribute’s value)

    • Children ('List<Map<String, Object>>' :: List of the root’s child elements

      • child ('Map<String, Object>' :: List element)

        • …​common element sub structure…​

    • ~EXAMPLENODE1~ ('Map<String, Object>' :: One of the root’s child nodes)

      • …​common element structure…​

    • ~EXAMPLENODE2~ ('Map<String, Object>' :: One of the root’s child nodes)

      • …​common element sub structure…​

      • ~EXAMPLENODE21~ ('Map<String, Object>' :: One of the nodes' child nodes)

        • …​common element structure…​

      • ~EXAMPLENODE…​~

    • ~EXAMPLENODE…​~

In contrast to the Java input reader, the XML input reader does not currently provide any additional template methods.
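
As an illustration, assume a hypothetical input document whose root element <catalog name="demo"> contains several <book> child elements. Based on the model above, a template could access it as follows (a sketch; the concrete model depends on the concrete document):

Root attribute: ${catalog._at_name}
<#list catalog.Children as child>
  ${child._nodeName_}: ${child._text_}
</#list>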

Merger extensions

The XML plugin uses the LeXeMe merger library to produce semantically correct merge products. The merge strategies can be found in the MergeType enum and can be configured in the templates.xml as a mergeStrategy attribute:

  • mergeStrategy 'xmlmerge'

    Listing 116. Example of a template using the mergeStrategy xmlmerge
    <templates>
    	<template name="..." destinationPath="..." templateFile="..." mergeStrategy="xmlmerge"/>
    </templates>

Currently, only the document types supported by LeXeMe are supported. For details on how the merger works, consult the LeXeMe Wiki.

69.2.4. Text Merger Plug-in

The Text Merger Plug-in enables merging generated free-text documents into existing free-text documents. Accordingly, the algorithms are rather rudimentary.

Merger extensions

There are currently three main merge strategies that apply for the whole document:

  • merge strategy textmerge_append (appends the text directly to the end of the existing document). Remark: If no anchors are defined, this will simply append the patch.

  • merge strategy textmerge_appendWithNewLine (appends the text after adding a line break to the existing document). Remark: Since v1.0.1, empty patches no longer result in appending a new line. Remark: Only suitable if no anchors are defined; otherwise it simply acts as textmerge_append.

  • merge strategy textmerge_override (replaces the contents of the existing file with the patch). Remark: If anchors are defined, override is the default mergestrategy for every text block that is not redefined in an anchor specification.

Anchor functionality

If a template contains text that fits the definition of anchor:${documentpart}:${mergestrategy}:anchorend, or more specifically the regular expression (.*)anchor:([:]+):(newline_)?([:]+)(_newline)?:anchorend\\s*(\\r\\n|\\r|\\n), some additional functionality becomes available concerning specific parts of the incoming text and the way they will be merged with the existing text. An anchor always affects the text following it up to the next anchor; text before it is ignored.

If no anchors are defined, the complete patch will be appended depending on your choice for the template in the file templates.xml.

Anchor Definition

Anchors should always be defined as a comment of the language the template results in: you do not want them to appear in the readable version, but you cannot define them as FreeMarker comments in the template, or the merger will not know about them. Anchors are also read when they are not comments, because the merger can handle many types of text-based languages, which makes it practically impossible to filter for the correct comment declaration. That is why anchors always have to be followed by a line break; this provides a universal way to distinguish anchors that should have anchor functionality from ones that should appear in the text. Remark: If the resulting language has closing tags for comments, they have to appear on the next line. Remark: If you do not put the anchor on a new line, all the text that appears before it will be added to the anchor.

Documentparts

In general, ${documentpart} is an id marking a part of the document, so that the merger knows which parts of the text to merge with which parts of the patch (e.g. if the existing text contains anchor:table:${}:anchorend, that part will be merged with the part tagged anchor:table:${}:anchorend in the patch).

If the same documentpart is defined multiple times, it can lead to errors, so instead of defining table multiple times, use table1, table2, table3 etc.

If a ${documentpart} is defined in the document but not in the patch, and they occupy the same position, it is processed in the following way: if the document defines only the documentparts header, test and footer in that order, and the patch contains header, order and footer, the resulting order will be header, test, order and then footer.

The following documentparts have default functionality:

  1. anchor:header:${mergestrategy}:anchorend marks the beginning of a header that will be added once when the document is created, but not again. Remark: This is only done once; if header occurs in another anchor, it will be ignored.

  2. anchor:footer:${mergestrategy}:anchorend marks the beginning of a footer that will be added once when the document is created, but not again. Once this is invoked, all following text will be included in the footer, including other anchors.

Mergestrategies

Mergestrategies are only relevant in the patch, as the merger is only interested in how text in the patch should be managed, not how it was managed in the past.

  1. anchor:${documentpart}::anchorend will use the merge strategy from templates.xml, see Merger-Extensions.

  2. anchor:${}:${mergestrategy}_newline:anchorend or anchor:${}:newline_${mergestrategy}:anchorend states that a new line should be appended before or after this anchor’s text, depending on where newline is placed (before or after the mergestrategy). anchor:${documentpart}:newline:anchorend puts a new line after the anchor’s text. Remark: This only works with appending strategies, not merging/replacing ones. These strategies currently include: appendbefore, append/appendafter.

  3. anchor:${documentpart}:override:anchorend means that the new text of this documentpart will replace the existing one completely

  4. anchor:${documentpart}:appendbefore:anchorend or anchor:${documentpart}:appendafter:anchorend/anchor:${documentpart}:append:anchorend specifies whether the text of the patch should come before the existing text or after.

Usage Examples
General

Below you can see what a file with anchors might look like (using AsciiDoc comment tags), with examples of what you might use the different functions for.

// anchor:header:append:anchorend

Table of contents
Introduction/Header

// anchor:part1:appendafter:anchorend

Lists
Table entries

// anchor:part2:nomerge:anchorend

Document Separators
Asciidoc table definitions

// anchor:part3:override:anchorend

Anything that you only want once but changes from time to time

// anchor:footer:append:anchorend

Copyright Info
Imprint
Merging

This section shows a comparison of what files look like before and after merging.

override
Listing 117. Before
// anchor:part:override:anchorend
Lorem Ipsum
Listing 118. Patch
// anchor:part:override:anchorend
Dolor Sit
Listing 119. After
// anchor:part:override:anchorend
Dolor Sit
Appending
Listing 120. Before
// anchor:part:append:anchorend
Lorem Ipsum
// anchor:part2:appendafter:anchorend
Lorem Ipsum
// anchor:part3:appendbefore:anchorend
Lorem Ipsum
Listing 121. Patch
// anchor:part:append:anchorend
Dolor Sit
// anchor:part2:appendafter:anchorend
Dolor Sit
// anchor:part3:appendbefore:anchorend
Dolor Sit
Listing 122. After
// anchor:part:append:anchorend
Lorem Ipsum
Dolor Sit
// anchor:part2:appendafter:anchorend
Lorem Ipsum
Dolor Sit
// anchor:part3:appendbefore:anchorend
Dolor Sit
Lorem Ipsum
Newline
Listing 123. Before
// anchor:part:newline_append:anchorend
Lorem Ipsum
// anchor:part:append_newline:anchorend
Lorem Ipsum
(end of file)
Listing 124. Patch
// anchor:part:newline_append:anchorend
Dolor Sit
// anchor:part:append_newline:anchorend
Dolor Sit
(end of file)
Listing 125. After
// anchor:part:newline_append:anchorend
Lorem Ipsum

Dolor Sit
// anchor:part:append_newline:anchorend
Lorem Ipsum
Dolor Sit

(end of file)
Error List
  • If there are anchors in the text, but either base or patch does not start with one, the merging process will be aborted, as text might go missing otherwise.

  • Using _newline or newline_ with mergestrategies that do not support it, like override, will abort the merging process. See item 2 of Mergestrategies for details.

  • Using undefined mergestrategies will abort the merging process.

  • Wrong anchor definitions, for example anchor:${}:anchorend will abort the merging process, see Anchor Definition for details.

69.2.5. JSON Plug-in

At the moment, the plug-in can be used to merge generic JSON files depending on the merge strategy defined for the templates.

Merger extensions

There are currently these merge strategies:

Generic JSON Merge

  • merge strategy jsonmerge (adds the new code, keeping the existing code in case of conflict)

  • merge strategy jsonmerge_override (adds the new code, overwriting the existing code in case of conflict)

    1. JSON arrays will be ignored / replaced in total.

    2. JSON objects in conflict will be processed recursively, adding non-existent elements and resolving conflicting values according to the chosen strategy.

Merge Process
Generic JSON Merging

The merge process will be:

  1. JSON objects from the patch file that do not exist in the base file are added.

  2. For objects existing in both files, keys from the patch that do not exist in the base object are added. This is done recursively for all existing objects.

  3. For JSON arrays existing in both files, the arrays are simply concatenated.
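
A minimal sketch with hypothetical documents:

base:   { "name": "app", "server": { "port": 8080 } }
patch:  { "name": "patched", "server": { "port": 9090, "host": "localhost" }, "debug": true }

result with jsonmerge (existing values kept on conflict):
    { "name": "app", "server": { "port": 8080, "host": "localhost" }, "debug": true }

result with jsonmerge_override (patch values win on conflict):
    { "name": "patched", "server": { "port": 9090, "host": "localhost" }, "debug": true }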

69.2.6. TypeScript Plug-in

The TypeScript Plug-in enables merging generated TypeScript files into existing ones. At the moment, this plug-in is used to generate an Angular2 client with all CRUD functionality enabled. The plug-in also generates the i18n functionality by appending the suffix ES or EN to the end of a word, signalling to the developer that these words must be translated into the corresponding language. Currently, the generation of the Angular2 client requires an ETO Java object as input, so there is no need to implement an input reader for TypeScript artifacts for the moment.

Trigger Extensions

As the input for the Angular2 generation is a Java object, the trigger expressions (including matchers and variable assignments) are those implemented by the Java plug-in.

Merger extensions

This plugin uses the OASP TypeScript Merger to merge files. There are currently two merge strategies:

  • merge strategy tsmerge (adds the new code, keeping the existing code in case of conflict)

  • merge strategy tsmerge_override (adds the new code, overwriting the existing code in case of conflict)

The merge algorithm mainly handles the following AST nodes:

  • ImportDeclaration

    • Non-existent imports will be added regardless of the merge strategy.

    • For different imports from the same module, the import clauses will be merged.

      import { a } from 'b';
      import { c } from 'b';
      //Result
      import { a, c } from 'b';
  • ClassDeclaration

    • Adds properties from the patch that do not yet exist in the base class, matched by name.

    • Adds methods from the patch that do not yet exist in the base class, matched by name/signature.

    • Adds non-existent annotations to the class, its properties and its methods.

  • PropertyDeclaration

    • Adds non-existent decorators.

    • Merges existing decorators.

    • With override strategy, the value of the property will be replaced by the patch value.

  • MethodDeclaration

    • With override strategy, the body will be replaced.

    • The parameters will be merged.

  • ParameterDeclaration

    • Replaces type and modifiers with the override merge strategy, adding non-existent ones from the patch into the base.

  • ConstructorDeclaration

    • Merged in the same way as Method is.

  • FunctionDeclaration

    • Merged in the same way as Method is.
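
As an illustration, here is a sketch of merging a class with the tsmerge strategy (all names are hypothetical; the exact output formatting depends on the merger):

// base
export class UserComponent {
  name: string;
  save(): void { /* existing body */ }
}

// patch
export class UserComponent {
  age: number;
  save(): void { /* generated body */ }
  remove(): void { }
}

// result with tsmerge: the existing body of save() is kept
export class UserComponent {
  name: string;
  age: number;
  save(): void { /* existing body */ }
  remove(): void { }
}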

Input reader

The TypeScript input reader is based on the one the TypeScript merger uses. The current extensions are additional module fields stating from which library an entity originates; module: null denotes a standard entity or type such as string or number.

Object model

To get a first impression of the created object after parsing, let us start by analyzing a small example, namely the parsing of a simple TypeORM model written in TypeScript.

import {Entity, PrimaryGeneratedColumn, Column} from "typeorm";

@Entity()
export class User {

    @PrimaryGeneratedColumn()
    id: number;

    @Column()
    firstName: string;

    @Column()
    lastName: string;

    @Column()
    age: number;

}

The returned object has the following structure:

{
  "importDeclarations": [
    {
      "module": "typeorm",
      "named": [
        "Entity",
        "PrimaryGeneratedColumn",
        "Column"
      ],
      "spaceBinding": true
    }
  ],
  "classes": [
    {
      "identifier": "User",
      "modifiers": [
        "export"
      ],
      "decorators": [
        {
          "identifier": {
            "name": "Entity",
            "module": "typeorm"
          },
          "isCallExpression": true
        }
      ],
      "properties": [
        {
          "identifier": "id",
          "type": {
            "name": "number",
            "module": null
          },
          "decorators": [
            {
              "identifier": {
                "name": "PrimaryGeneratedColumn",
                "module": "typeorm"
              },
              "isCallExpression": true
            }
          ]
        },
        {
          "identifier": "firstName",
          "type": {
            "name": "string",
            "module": null
          },
          "decorators": [
            {
              "identifier": {
                "name": "Column",
                "module": "typeorm"
              },
              "isCallExpression": true
            }
          ]
        },
        {
          "identifier": "lastName",
          "type": {
            "name": "string",
            "module": null
          },
          "decorators": [
            {
              "identifier": {
                "name": "Column",
                "module": "typeorm"
              },
              "isCallExpression": true
            }
          ]
        },
        {
          "identifier": "age",
          "type": {
            "name": "number",
            "module": null
          },
          "decorators": [
            {
              "identifier": {
                "name": "Column",
                "module": "typeorm"
              },
              "isCallExpression": true
            }
          ]
        }
      ]
    }
  ]
}

If we only consider the first level of the JSON response, we spot two lists of imports and classes, providing information about the only import statement and the only User class, respectively. Moving one level deeper we observe that:

  • Every import statement is translated to an import declaration entry in the declarations list, containing the module name, as well as a list of entities imported from the given module.

  • Every class entry provides besides the class identifier, its decoration(s), modifier(s), as well as a list of properties that the original class contains.

Note that, for each given type, the module from which it is imported is also given as in

  "identifier": {
    "name": "Column",
    "module": "typeorm"
  }

Returning to the general case, independently of the given TypeScript file, an object with the following structure will be created:

  • importDeclarations: A list of import statements as described above

  • exportDeclarations: A list of export declarations

  • classes: A list of classes extracted from the given file, where each entry contains class-specific fields describing, for example, its properties and decorators.

  • interfaces: A list of interfaces.

  • variables: A list of variables.

  • functions: A list of functions.

  • enums: A list of enumerations.

69.2.7. HTML Plug-in

The HTML Plug-in enables merging generated HTML files into existing ones. At the moment, this plug-in is used to generate an Angular2 client. Currently, the generation of the Angular2 client requires an ETO Java object as input, so there is no need to implement an input reader for HTML artifacts for the moment.

Trigger Extensions

As the input for the Angular2 generation is a Java object, the trigger expressions (including matchers and variable assignments) are those implemented by the Java plug-in.

Merger extensions

There are currently two merge strategies:

  • merge strategy html-ng* (adds the new code, keeping the existing code in case of conflict)

  • merge strategy html-ng*_override (adds the new code, overwriting the existing code in case of conflict)

The merging of two Angular2 files will be processed as follows:

The merge algorithm handles the following AST nodes:

  • md-nav-list

  • a

  • form

  • md-input-container

  • input

  • name (for name attribute)

  • ngIf

Warning
Be aware that the HTML merger is not generic and only handles the described tags, which are needed for merging the code of a basic Angular client implementation. For future versions, it is planned to implement a more generic solution.