There are many guides on particular applications of computational principles in engineering. It doesn’t take more than a quick search engine entry to find a plethora of links on topics such as the Newton-Raphson method, tips and tricks for the latest engineering calculation and simulation software suites, and so on. So before I get caught up in that storm of application-specific posts and tutorials, I’d like to take a step back and do a quick review of what computers fundamentally do.

At the most basic level, computers

  1. perform calculations,
  2. retain results.

It seems overly simple given the seemingly vast capabilities of your everyday 21st-century computer, but that is all they actually do. The types of calculations a computer can perform are determined by its built-in operations and by the presence of various user-defined functions. But if that is all, how can computers do everything they do today? The short, sweet answer is algorithm design. An algorithm is a set of instructions (the computer’s equivalent of ‘imperative knowledge’) designed to carry out a certain task, such as calculating the natural log of a number or connecting one computer to another in a network. Many disciplines (especially STEM fields), as well as many aspects of modern life, revolve around efficiently designed algorithms. In a stored-program computer (one that is configurable, as most computers today are), algorithms are written and loaded via what everyone knows as a programming language. This is where all the buzz about learning how to code shows up.
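To make that concrete, here is a minimal sketch of such an algorithm: computing the natural log of a number with the Newton-Raphson method mentioned earlier, by solving e^x = y for x. (The function name, starting guess, and tolerance are my own choices for illustration, not a standard recipe.)

```python
import math

def natural_log(y, tolerance=1e-12, max_iterations=100):
    """Approximate ln(y) by Newton's method on f(x) = e^x - y.

    Each iteration refines the guess: x_next = x - (e^x - y) / e^x.
    """
    if y <= 0:
        raise ValueError("ln is only defined for positive numbers")
    x = 1.0  # initial guess
    for _ in range(max_iterations):
        step = (math.exp(x) - y) / math.exp(x)
        x -= step
        if abs(step) < tolerance:  # stop once the correction is tiny
            break
    return x
```

A handful of arithmetic steps, repeated and stored, is all that’s happening here — which is exactly the “calculate and retain” picture above.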

Different programming languages suit different tasks, by the nature of their design. MATLAB, for example, is excellent for matrix manipulation, while Python is a general-purpose language that is also well-suited for data visualization. However, at least in theory, anything that can be computed in one language can be computed in another. This is one way of interpreting the term Turing complete (which all the popular languages are, I’m pretty sure). As you may have guessed, that term originates from the work of one of the great forefathers of computing, Alan Turing.
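As a small illustration of that equivalence, here is MATLAB’s bread-and-butter operation — matrix multiplication, `A * B` — written in plain Python with nested lists. It’s clumsier than the MATLAB one-liner, but the computation is the same (the helper name `matmul` is just my label for this sketch):

```python
def matmul(a, b):
    """Multiply two matrices given as nested lists (MATLAB's A * B)."""
    if len(a[0]) != len(b):
        raise ValueError("inner dimensions must match")
    # Entry (i, j) is the dot product of row i of a with column j of b.
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))]
            for i in range(len(a))]

matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]])  # → [[19, 22], [43, 50]]
```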

Armed with a bunch of programming languages, humans started to make everything from traffic-control systems to code that solves computational fluid dynamics problems. Engineers, for example, now typically study at least one language in college (usually MATLAB), as well as numerical methods, to understand the nature of computation (starting all the way down at floating-point numbers and mantissas) and to avoid potentially disastrous errors. And so, as an engineer and apologetic coder myself, I hope that retaining these small ‘bits’ of information about what computers actually do, and how they do it, helps show that many engineering software packages are much more similar to each other than they look. I also hope we will all appreciate that a good working knowledge of computation and scripting can go a long way in STEM (especially where computing knowledge is underemphasized for engineers, as in places like Myanmar). To this end, in the following weeks I will try to post a little more about computational concepts applied to engineering analysis.
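The floating-point point deserves one concrete taste. A number like 0.1 has no exact binary representation, so its finite mantissa stores the nearest representable value instead — which is why naive equality tests on computed results can bite:

```python
# 0.1 cannot be stored exactly in binary floating point, so small
# rounding errors accumulate in arithmetic.
print(0.1 + 0.2 == 0.3)   # False: the sum lands slightly above 0.3
print(0.1 + 0.2)          # prints a value just over 0.3

# The standard remedy: compare against a tolerance, not for equality.
print(abs((0.1 + 0.2) - 0.3) < 1e-9)  # True
```

In a long simulation, millions of such tiny errors can compound — one reason numerical methods courses start at the bit level.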

Main source:

Guttag, John. Introduction to Computation and Programming Using Python: With Application to Understanding Data. Cambridge, MA: The MIT Press, 2017.