
DSLs are used for a specific context in a particular domain. Learn more about what they are and why you might want to use one.



A domain-specific language (DSL) is a language meant for use in the context of a particular domain. A domain could be a business context (e.g., banking, insurance, etc.) or an application context (e.g., a web application, database, etc.). In contrast, a general-purpose language (GPL) can be used across a wide range of business problems and applications.

A DSL does not attempt to please everyone. Instead, it is created for a limited sphere of applicability and use, but it is powerful enough to represent and address the problems and solutions in that sphere. A good example of a DSL is HTML, a language for the web application domain. It can't be used for, say, number crunching, yet HTML is used ubiquitously across the web.

A GPL creator does not know where the language might be used or the problems the user intends to solve with it. So, a GPL is created with generic constructs that are potentially usable for any problem, solution, business, or need. Java is a GPL: it runs on desktops, mobile devices, and embedded systems, and it is used across banking, finance, insurance, manufacturing, and more.

Classifying DSLs

In the DSL world, there are two types of languages:

  • Domain-specific language (DSL): The language in which a DSL is written or presented
  • Host language: The language in which a DSL is executed or processed

A DSL written in a distinct language and processed by another host language is called an external DSL.

This is a DSL in SQL that can be processed in a host language:

SELECT account
FROM accounts
WHERE account = '123' AND branch = 'abc' AND amount >= 1000
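To sketch what "processed in a host language" means in practice, here is a minimal Python example that runs a similar query through the standard library's sqlite3 module. The table and sample rows are illustrative; the SQL string is the external DSL, and Python is the host language.

```python
import sqlite3

# In-memory database with an illustrative accounts table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (account TEXT, branch TEXT, amount REAL)")
conn.execute("INSERT INTO accounts VALUES ('123', 'abc', 1500.0)")
conn.execute("INSERT INTO accounts VALUES ('456', 'xyz', 500.0)")

# The SQL string is the external DSL; Python is the host language that processes it.
query = """SELECT account FROM accounts
           WHERE account = '123' AND branch = 'abc' AND amount >= 1000"""
rows = conn.execute(query).fetchall()
print(rows)  # [('123',)]
```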



For that matter, a DSL could be written in English with a defined vocabulary and form that can be processed in another host language using a parser generator like ANTLR:

if smokes then increase premium by 10%
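ANTLR would generate a full parser from a grammar, but the idea can be sketched without it. The following Python snippet is a hand-rolled stand-in (not ANTLR output) that recognizes the illustrative rule form above and applies it; the rule format and fact names are invented for this example.

```python
import re

# Pattern for the illustrative rule form: "if <condition> then increase premium by <n>%"
RULE = re.compile(r"if (\w+) then increase premium by (\d+)%")

def apply_rule(rule_text, facts, premium):
    """Parse one English-like rule and apply it to a premium."""
    match = RULE.fullmatch(rule_text.strip())
    if not match:
        raise ValueError("unrecognized rule: " + rule_text)
    condition, percent = match.group(1), int(match.group(2))
    if facts.get(condition):              # e.g., facts = {"smokes": True}
        premium *= 1 + percent / 100
    return premium

print(apply_rule("if smokes then increase premium by 10%", {"smokes": True}, 500.0))
```

A real DSL would use a generated parser and a richer grammar; the point is only that the host language gives the English-like text its meaning.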

If the DSL and host language are the same, then the DSL type is internal, where the DSL is written in the language's semantics and processed by it. These are also referred to as embedded DSLs. Here are two examples.

  • A Bash DSL that can be executed in a Bash engine:
    if today_is_christmas; then apply_christmas_discount; fi
    This is valid Bash that is written like English.
  • A DSL written in a GPL like Java (the fluent method names here are illustrative):
    orderValue = new Order().buy(100).at(25.00).value();
    This uses a fluent style and is readable like English.
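A fluent internal DSL like this is usually just method chaining in the host language: each method returns the object itself. A minimal Python sketch (class and method names are invented for illustration):

```python
class Order:
    """A tiny fluent 'order' DSL embedded directly in the host language."""

    def __init__(self):
        self.quantity = 0
        self.price = 0.0

    def buy(self, quantity):
        self.quantity = quantity
        return self          # returning self is what enables chaining

    def at(self, price):
        self.price = price
        return self

    def value(self):
        return self.quantity * self.price

# Reads almost like English: buy 100 at 25.0
order_value = Order().buy(100).at(25.0).value()
print(order_value)  # 2500.0
```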

Yes, the boundaries between DSL and GPL sometimes blur.



DSL examples

Some languages used for DSLs include:

  • Web: HTML
  • Shell: sh, Bash, CSH, and the like for *nix; MS-DOS batch files, CMD, and PowerShell for Windows
  • Markup languages: XML
  • Modeling: UML
  • Data management: SQL and its variants
  • Business rules: Drools
  • Hardware: Verilog, VHDL
  • Build tools: Maven, Gradle
  • Numerical computation and simulation: MATLAB (commercial), GNU Octave, Scilab
  • Various types of parsers and generators: Lex, YACC, GNU Bison, ANTLR


Why DSL?

The purpose of a DSL is to capture or document the requirements and behavior of one domain. A DSL's usage might be even narrower for particular aspects within the domain (e.g., commodities trading in finance). DSLs bring business and technical teams together. This does not imply a DSL is for business use alone. For example, designers and developers can use a DSL to represent or design an application.

A DSL can also be used to generate source code for an addressed domain or problem. However, code generation from a DSL is not considered mandatory; its primary purpose is capturing domain knowledge. That said, when code generation is used, it is a serious advantage in domain engineering.
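As a sketch of what code generation from a DSL can look like, the following Python snippet turns the earlier English-like insurance rule into source text for a function. The rule format and the generated function name are illustrative, not from any particular tool.

```python
import re

def generate_code(rule_text):
    """Generate Python source from an English-like rule (illustrative format)."""
    m = re.fullmatch(r"if (\w+) then increase premium by (\d+)%", rule_text.strip())
    if m is None:
        raise ValueError("unrecognized rule: " + rule_text)
    condition, percent = m.group(1), m.group(2)
    return (
        f"def adjust_premium(facts, premium):\n"
        f"    if facts.get('{condition}'):\n"
        f"        premium *= 1 + {percent} / 100\n"
        f"    return premium\n"
    )

source = generate_code("if smokes then increase premium by 10%")
namespace = {}
exec(source, namespace)   # compile the generated source into a callable function
print(namespace["adjust_premium"]({"smokes": False}, 500.0))  # 500.0
```

Real generators typically emit code in a GPL for a whole model, not one rule, but the principle is the same: the DSL captures the domain knowledge once, and executable code is derived from it.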


DSL pros and cons

On the plus side, DSLs are powerful for capturing a domain's attributes. Also, since DSLs are small, they are easy to learn and use. Finally, a DSL offers a common language for domain experts and a bridge between domain experts and developers.

On the downside, a DSL is narrowly used within the intended domain and purpose. Also, a DSL has a learning curve, although it may not be very high. Additionally, although there may be advantages to using tools for DSL capture, they are not essential, and the development or configuration of such tools is an added effort. Finally, DSL creators need domain knowledge as well as language-development knowledge, and individuals rarely have both.



DSL software options


Open source DSL software options include:

  • Xtext: Xtext enables the development of DSLs and is integrated with Eclipse. It makes code generation possible and has been used by several open source and commercial products to provide specific functions. MADS (Multipurpose Agricultural Data System) is an interesting idea based on Xtext for "modeling and analysis of agricultural activities" (however, the project seems to be no longer active).
  • JetBrains MPS: JetBrains MPS is an integrated development environment (IDE) to create DSLs. It calls itself a projectional editor that stores a document as its underlying abstract tree structure. (This concept is also used by programs such as Microsoft Word.) JetBrains MPS also supports code generation to Java, C, JavaScript, or XML.


DSL best practices

Want to use a DSL? Here are a few tips:

  • DSLs are not GPLs. Try to address a limited range of problems in a well-defined domain.
  • You do not need to define your own DSL. That would be tedious. Look for an existing DSL that solves your need on sites like DSLFIN, which lists DSLs for the finance domain. If you are unable to find a suitable DSL, you could define your own.
  • It is better to make DSLs "like English" rather than too technical.
  • Code generation from a DSL is not mandatory, but it offers significant and productive advantages when it is done.
  • DSLs are called languages but, unlike GPLs, they need not be executable. Being executable is not the intent of a DSL.
  • DSLs can be written with word processors. However, using a DSL editor makes syntax and semantic checks easier.

If you are using DSL now or plan to do so in the future, please share your experience in the comments.



Published in GNU/Linux Rules!




Red Hat is noted for making open source a culture and business model, not just a way of developing software, and its message of open source as the path to innovation resonates on many levels.  

In anticipation of the upcoming Open Networking Summit, we talked with Thomas Nadeau, Technical Director NFV at Red Hat, who gave a keynote address at last year’s event, to hear his thoughts regarding the role of open source in innovation for telecommunications service providers.

One reason for open source’s broad acceptance in this industry, he said, was that some very successful projects have grown too large for any one company to manage, or single-handedly push their boundaries toward additional innovative breakthroughs.

“There are projects now, like Kubernetes, that are too big for any one company to do. There's technology that we as an industry need to work on, because no one company can push it far enough alone,” said Nadeau. “Going forward, to solve these really hard problems, we need open source and the open source software development model.”

Here are more insights he shared on how and where open source is making an innovative impact on telecommunications companies.

Me: Why is open source central to innovation in general for telecommunications service providers?

Nadeau: The first reason is that the service providers can be in more control of their own destiny. There are some service providers that are more aggressive and involved in this than others. Second, open source frees service providers from having to wait for long periods for the features they need to be developed.

And third, open source frees service providers from having to struggle with using and managing monolith systems when all they really wanted was a handful of features. Fortunately, network equipment providers are responding to this overkill problem. They're becoming much more flexible, more modular, and open source is the best means to achieve that.

Me: In your ONS keynote presentation, you said open source levels the playing field for traditional carriers in competing with cloud-scale companies in creating digital services and revenue streams. Please explain how open source helps.

Nadeau: Kubernetes again. OpenStack is another one. These are tools that these businesses really need, not to just expand, but to exist in today's marketplace. Without open source in that virtualization space, you’re stuck with proprietary monoliths, no control over your future, and incredibly long waits to get the capabilities you need to compete.

There are two parts in the NFV equation: the infrastructure and the applications. NFV is not just the underlying platforms, but this constant push and pull between the platforms and the applications that use the platforms.

NFV is really virtualization of functions. It started off with monolithic virtual machines (VMs). Then came "disaggregated VMs" where individual functions, for a variety of reasons, were run in a more distributed way. To do so meant separating them, and this is where SDN came in, with the separation of the control plane from the data plane. Those concepts were driving changes in the underlying platforms too, which drove up the overhead substantially. That in turn drove interest in container environments as a potential solution, but it's still NFV.

You can think of it as the latest iteration of SOA with composite applications. Kubernetes is the kind of SOA model that they had at Google, which dropped the worry about the complicated networking and storage underneath and simply allowed users to fire up applications that just worked. And for the enterprise application model, this works great.

But not in the NFV case. In the NFV case, in the previous iteration of the platform at OpenStack, everybody enjoyed near one-for-one network performance. But when we move it over here to OpenShift, we're back to square one where you lose 80% of the performance because of the latest SOA model that they've implemented. And so now evolving the underlying platform rises in importance, and so the pendulum swing goes, but it's still NFV. Open source allows you to adapt to these changes and influences effectively and quickly. Thus innovations happen rapidly and logically, and so do their iterations.  

Me: Tell us about the underlying Linux in NFV, and why that combo is so powerful.

Nadeau: Linux is open source and it always has been in some of the purest senses of open source. The other reason is that it's the predominant choice for the underlying operating system. The reality is that all major networks and all of the top networking companies run Linux as the base operating system on all their high-performance platforms. Now it's all in a very flexible form factor. You can lay it on a Raspberry Pi, or you can lay it on a gigantic million-dollar router. It's secure, it's flexible, and scalable, so operators can really use it as a tool now.

Me: Carriers are always working to redefine themselves. Indeed, many are actively seeking ways to move out of strictly defensive plays against disruptors, and onto offense where they ARE the disruptor. How can network function virtualization (NFV) help in either or both strategies?

Nadeau: Telstra and Bell Canada are good examples. They are using open source code in concert with the ecosystem of partners they have around that code, which allows them to do things differently than they have in the past. There are two main things they do differently today. One is that they design their own networks. They design their own things in a lot of ways, whereas before they would possibly need to use a turnkey solution from a vendor that looked a lot like, if not identical to, their competitors' businesses.

These telcos are taking a real "in-depth, roll up your sleeves" approach. Now that they understand what they're using at a much more intimate level, they can collaborate with the downstream distro providers or vendors. This goes back to the point that the ecosystem, which is analogous to partner programs that we have at Red Hat, is the glue that fills in gaps and rounds out the network solution that the telco envisions.

