Multithreading gems in C#

These weird tricks will make your life easier with multithreading in C#. In my experience, a lot of developers are not aware of them.

TPL Dataflow

It has been called the best library you’re not using. Perhaps because it’s not included by default in .NET, you have to install the System.Threading.Tasks.Dataflow NuGet package.

This library is a hidden gem that allows you to define data processing pipelines with stages for processing, buffering and grouping while taking care of concurrency and error handling. You can, for example, define a pipeline that receives data from multiple sources and batches inserts to a database to improve performance. Head to the documentation and discover the power of Dataflow.

Wrapping legacy asynchronous patterns

Once upon a time, you started new Thread()s, and life was full of pain.

.NET Framework evolved, and we had the Asynchronous Programming Model (APM) in .NET Framework 1.0, then Event-based Asynchronous Pattern (EAP) in .NET Framework 2.0, and Task-based Asynchronous Pattern (TAP) in .NET Framework 4.

When TAP was combined with async/await in .NET Framework 4.5, developers’ lives became much easier. Async/await was so revolutionary that the feature was copied by Python, TypeScript, JavaScript, Rust, Swift and others.

Today it makes no sense to use anything other than TAP and async/await, but sometimes you have to work with legacy code that still uses the old patterns (I’m looking at you, RabbitMQ).

The good news is that you can wrap those with TAP, and the .NET documentation describes some patterns to do it. If you find yourself dealing with IAsyncResults or Events, do yourself a favor and wrap them with TAP.

Don’t forget the Parallel class and PLINQ

The Parallel class together with Parallel LINQ (PLINQ) are easy ways to add data parallelism to an application, allowing you to process collections concurrently while being able to specify how many threads you want, how to partition the data and which scheduler to use. It doesn’t get simpler than these.

Not really weird or hidden, but still overlooked a lot of times. I forget about them sometimes. Perhaps because this is multithreading after all, so it can’t possibly be that easy.

Chatting with GPT

The hype around AI is insane right now. Unless you’re living under a rock, you must have heard some of it. It’s displaying emergent behaviors! It’s become sentient! Bomb the data centers now or it will destroy all life on Earth!

It’s insane… is it?

No, it’s not sentient or intelligent

I remember trying ELIZA when I was a kid and pretty much ignoring AI after that because it was so dumb.

Fast forward to 2023 and we have AI systems creating text, music, pictures and videos, predicting new drugs and writing computer code. It cannot be ignored anymore. I first started using DALL-E and Stable Diffusion for jokes, but with GPT-4 things got serious.

GPT-4 is very impressive, but it’s not sentient, it has no internal life or sensations, and I don’t know how someone can be confused about that. Chatting with it feels robotic, with formulaic answers and unnecessary repetition. Sometimes it gets confused about what you’re referring to. At some point, it confidently told me that Lord of the Rings was written by Gandalf. No human would make this kind of mistakes (they call it “hallucinations”) because we have understanding.

GPT-4 doesn’t really understand what it’s talking about. The smartness it displays is the smartness encoded in the vast amount of human knowledge used for its training. In the end, GPT-4 feels for me like interacting with a sophisticated database using what I’ll call Speech Query Language (pun intended).

The lack of understanding is also why it sucks at reasoning and math.

But it’s good enough

Chatting with GPT is genuinely entertaining, and now I use it often to start some research, instead of wading through pages of SEO garbage.

While the AI systems of today might not produce the best writing or the most original pictures, they’re good enough for a huge number of cases. Even ELIZA back then was able to convince some people that it was intelligent, sometimes with tragic results.

The image that illustrates this post is AI generated, of course. I no longer see the need to make one myself or pay someone else to do it. People are already losing their jobs because of this. Even doctors are starting to freak out.

The societal effects could be devastating. Humans always had their physical or their intellectual labor to trade with, there is nothing to offer beyond that. AI could make humans permanently useless to the economy, completely beholden to the machines or the owners of the machines.

You don’t need to imagine it: there are places in the world—today—where human labor is almost worthless to the powers that be. Spoiler: it’s not pretty.

And it will only get better

Billions of dollars are being poured into AI right now with the goal of achieving Artificial General Intelligence.

There might be some bumps in the road. Perhaps the current models are not enough to reach AGI and we will enter another AI winter. Maybe we will run into physical or energy limitations. Currently AI is very power hungry. In contrast, the human brain is astonishingly efficient, consuming around 12 watts.

Or a nation might decide to start World War III just to stop others from creating an Artificial Super Intelligence (ASI). An ASI is a super weapon. It might be able to tell you, step by step, how to build a Star Destroyer and achieve world domination, if the laws of physics allow it.

But if physicalism is true, computers will replicate the human mind, it’s only a question of time.

I am skeptical that AGI will turn us all into paperclips, any half-smart AGI should realize how pointless that is. If that happens, it will be because someone directed it to. And that is the most terrifying part about AI: the humans controlling it. People are already using it to cause terror and it’s easy to see how it’s the perfect tool for oppression.

In any case, a thinking computer is no longer a complement to a human, it’s a replacement. Able to think faster, without interruption, with unlimited and perfect memory. It will change humanity forever.

After AGI, all bets are off.

The two best programming languages that nobody uses

A language that doesn’t affect the way you think about programming, is not worth knowing.

Alan Perlis, “Epigrams on Programming”

Today I want to talk about two underdogs of programming languages. If you have only ever worked with object-oriented, C-family languages like Java, C# or JavaScript, these will definitely change the way you think about code.

LISP

Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp.

Philip Greenspun

LISP is the second oldest high-level programming language. It was designed by John McCarthy in 1958 to prove that a Turing-complete language could be built using a few simple elements based on a mathematical model of computation. Garbage collection was invented for LISP.

What’s so great about it?

The syntax. It’s simple and minimalist. Once you learn the very few syntax rules, you can forget about syntax forever and save your brainpower for learning the functions and libraries that you need to do stuff.

A hello world web service looks like this (copy and paste in DrRacket):

#lang racket
(require web-server/servlet
         web-server/servlet-env)
 
(define (start req)
  (response/full
   200
   #"OK"
   (current-seconds)
   #"text/plain"
   empty
   (list #"Hello world")))
 
(serve/servlet start)

Everything is functions calling functions. The fundamental data structure is the list. Lists go between parenthesis and by default they are evaluated as function calls, where the first element is the function name and the rest are the arguments. Code is also represented as lists so LISP can read and generate code as data.

Then why nobody uses it?

Saying that nobody uses LISP is a bit of a troll. Lots of people use it and love it, but it’s far from mainstream. The 2022 Stack Overflow Survey has Clojure as the most popular dialect at the 32th position, with only 1.51% of developers reporting to use it.

The syntax is LISP’s main advantage and often it’s downfall. It can be too minimalist and you can get too smart with it. For example, here’s one Fibonacci implementation in Racket. Can you immediately tell how it works?

#lang racket

(define (fib n)
  (car (foldl (lambda (y x)
                (let ((a (car x)) (b (cdr x)))
                  (cons b (+ a b)))) (cons 0 1) (range n))))
The letters are Elvish, of an ancient mode, but the language is that of Mordor, which I will not utter here.

The parenthesis are off-putting for a lot of people. You’ll be reaching for that Shift key a lot and it’s easy to get lost in them. Comments, code formatting and a good IDE can help.

Where can I start?

If you want something production-ready, go with Clojure. It runs on the JVM, so it’s fast and has access to the vast Java ecosystem, besides its own quality libraries. Robert Martin is a big fan. It is used in some big corporations like Apple and Netflix, and while there might not be as many jobs as with Python, Clojure is the top paying language in 2022. For learning head to https://www.braveclojure.com (but feel free to skip EMACS, instead use Visual Studio Code with the Calva extension).

You can also check out Racket. It is very nice. John Carmack has spoken highly of it and even taught his son to program Racket. But while some people have used it for production stuff, it’s more geared towards education, research and small applications.

Smalltalk

I made up the term “Object-Oriented”, and I can tell you I did not have C++ in mind.

Alan Kay, OOPSLA 1997

Smalltalk is the original gangster of object oriented programming and one of the most influential languages in history. It pioneered the use of virtual machines, graphical IDEs, design patterns—MVC was invented in Smalltalk—agile methodologies and test driven development.

When Steve Jobs and a team of Apple engineers visited Xerox PARC in 1979 to steal ideas get inspiration, and they saw a GUI that blew their minds and revealed them the future of personal computing, that GUI was Smalltalk.

What’s so great about it?

The syntax. It’s simple and minimalist. And easy to understand! It was in part designed for children.

A hello world web service looks like this (copy and paste in a Pharo Playground):

(ZnServer startDefaultOn: 8080)
	onRequestRespond: [ :request |
		ZnResponse ok: (ZnEntity text: 'Hello World') ].

In Smalltalk everything is objects communicating via messages. In fact the big idea is messages: every object is like a mini computer that can only communicate with others through messages. There is no public, protected, internal, protected internal or private protected madness in Smalltalk.

This elegant model has been praised by a lot of people. Some studies have shown that Smalltalk is one of the most productive programming languages, and even functional programming stalwarts like Scott Wlaschin have a soft spot for object oriented Smalltalk.

Then why nobody uses it?

Smalltalk was actually popular in the early 90s. It became the second most popular language behind C++, and IBM bet the future of the Internet on it. But while there are still some big users, today it really is a niche language. It doesn’t appear in the 2022 Stack Overflow survey and you will hardly find any jobs.

The problems with Smalltalk were various. In the past performance was an issue, but that didn’t stop Python from becoming popular. I would say the biggest problem today is the lack of commercial support which has resulted in poor environments with bugs, no hardware acceleration, green threads, poor integration and a lack of libraries.

Smalltalk is also a virtual machine, development environment, operating system and an entirely alternate vision of computing developed by Alan Kay, Dan Ingalls, Adele Goldberg and others at Xerox PARC. This model is alien to most developers today.

Where can I start?

Check out Squeak. It’s a direct descendant of Smalltalk-80 first developed at Apple Computer in the 90s.

There is also Pharo, with the stated goal of enabling commercial and mission critical applications. But after three major versions it still doesn’t properly support HiDPI, everything looks blurry in my screen. For that reason alone I can’t recommend it.

Turtles all the way down

The common theme to LISP and Smalltalk is the uniform application of a small set of simple and elegant concepts to build a complex system. Don’t let this simplicity fool you into thinking that these are primitive, old languages: they are very high level, even more that Python.

There are no for loops or if-else statements in LISP. You call functions which call other functions.

In Smalltalk you send a message to a number or boolean object. Smalltalk has six reserved words. Six. And with this everything is achieved. Compare that to most languages today which have hundreds of reserved words and operators, and thousands of syntactic rules, and you will wonder when things went off the rails.

Don’t call it REST

Many people describe their HTTP APIs as REST or RESTful, but almost 100% of the time these APIs are not REST. I have made the same mistake in the past, so some clarification is necessary.

What is REST?

REST is an architectural style defined by Roy Fielding in his doctoral dissertation which defines a very specific set of constraints for distributed systems: client-server; statelessness; cacheability; layered system; (optional) code-on-demand; and a uniform interface between components with the following requirements:

  • identification of resources
  • manipulation of resources through representations
  • self-descriptive messages
  • hypermedia as the engine of application state (HATEOAS)

HTTP is not a requirement for REST, but maps very well to it. So what happened over time is that people started building HTTP APIs and calling them REST or RESTful. The Richardson Maturity Model was developed to lay a path for evolving your API towards a real REST API.

But in real life most people never implemented all the requirements necessary for their APIs to be REST—specially HATEOAS—because it is a lot of work and there is a lack of tooling. I don’t know many examples of true REST APIs including HATEOAS, and I only know of one production quality library to make it easier, Spring HATEOAS.

Does this matter so much?

I think it’s important to call things by their right name.

Also, it is not a sin to not be RESTful. REST is not the only or ultimate architectural style, it has drawbacks.

Finally, when Roy Fielding, creator of REST, has said don’t call it REST unless it adheres to all the constraints necessary to be called REST—specially HATEOAS—, there is no point in arguing.

What to call it then

HTTP API. Most real life examples I see are simply RPC-style HTTP APIs. A popular style is the OpenAPI specification (formerly Swagger). Notice that it doesn’t mention REST once.

Provisioning Virtual Machines with Vagrant and Ansible

In a previous post I showed how to create and provision a VM for development purposes using Vagrant, and the benefits of being able to replicate a consistent environment with a single command. But Vagrant can also create several VMs with a single command and connect them by private networks, allowing to test complete infrastructure setups.

Ansible

As an Infrastructure as Code (IaC) tool, Ansible has a similarity to Vagrant. But Ansible is much more powerful and is widely used in production environments to manage baremetal and virtualized hosts running Linux, Unix or Windows, both on-premises and in the cloud.

Ansible works by connecting from a Control Node where Ansible is installed, to Managed Nodes where the configuration is applied. Ansible does not need to be installed in the Managed Nodes, it simply connects to them via SSH for Linux and Unix hosts and Windows Remote Management (WinRM) for Windows hosts. The only requirements are Python in Linux and Unix hosts and PowerShell in Windows hosts.

Using Vagrant together with Ansible

Vagrant supports several provisioners including Ansible. There are two different Ansible provisioners in Vagrant: Ansible and Ansible Local. The Ansible provisioner runs Ansible from your guest, while Ansible Local installs Ansible in a VM provisioned by Vagrant (Control Node) and uses it to configure other VMs (Managed Nodes). Since Ansible cannot run on Windows and I want to keep the requisites in your guest machine limited to Vagrant and VirtualBox, we’re going to use Ansible Local.

The Setup

We are going to configure two Nginx web nodes and a load balancer that will distribute requests to those two nodes in a round-robin manner. In addition we’ll create a fourth Ansible Control node.

Vagrantfile

Choose an empty directory and create the following Vagrantfile:

Vagrant.configure("2") do |config|
    config.vm.box = "bento/ubuntu-18.04"

    config.vm.define "lb" do |machine|
        machine.vm.network "private_network", ip: "172.17.177.21"
        machine.vm.network "forwarded_port", guest: 80, host: 8080, host_ip: "127.0.0.1"
    end

    config.vm.define "node1" do |machine|
        machine.vm.network "private_network", ip: "172.17.177.22"
    end

    config.vm.define "node2" do |machine|
        machine.vm.network "private_network", ip: "172.17.177.23"
    end

    config.vm.define "controller" do |machine|
        machine.vm.network "private_network", ip: "172.17.177.11"

        machine.vm.provision "ansible_local" do |ansible|
            ansible.playbook = "provisioning/playbook.yml"
            ansible.limit = "all"
            ansible.inventory_path = "provisioning/hosts"
            ansible.config_file = "provisioning/ansible.cfg"
        end

        machine.vm.synced_folder ".", "/vagrant", mount_options: [ "umask=077" ]
    end
end

First we define a load balancer (lb) node and connect it to private_network with an IP address. We also forward port 8080 in our host machine to port 80 in the VM, so we can access it through our browser.

Then we define two web nodes (node1 and node2) and join them to private_network with an IP address. These nodes have no port forward so they are not accessible through our browser.

Finally we define the Ansible controller (controller) that is going to be used by Vagrant to configure the other nodes. We join it to private_network with an IP. We use the ansible_local provisioner as discussed before, indicating that we want to run the playbook on all hosts (ansible.limit = "all") and indicate the path to the playbook, inventory and ansible.cfg files. Finally we override the default configuration for the synced_folder, using a umask to remove permissions from all users except vagrant. This is necessary otherwise both Ansible and ssh will complain for security reasons and fail.

Ansible configuration

Create a provisioning directory where we’ll place all Ansible related files. By default Vagrant autogenerates an inventory that is placed in the guest VM under the path /tmp/vagrant-ansible/inventory/vagrant_ansible_local_inventory, but since we have no name resolution we cannot use it. Instead create a hosts file under the provisioning directory:

controller ansible_connection=local
 lb         ansible_host=172.17.177.21 ansible_ssh_private_key_file=/vagrant/.vagrant/machines/lb/virtualbox/private_key
 node1      ansible_host=172.17.177.22 ansible_ssh_private_key_file=/vagrant/.vagrant/machines/node1/virtualbox/private_key
 node2      ansible_host=172.17.177.23 ansible_ssh_private_key_file=/vagrant/.vagrant/machines/node2/virtualbox/private_key
 [nginx]
 lb
 node[1:2]

This file is telling Ansible how to connect to the hosts. It lists the IP addresses that we defined in Vagrantfile and the private keys to connect to every host. Vagrant places the private keys under .vagrant/machines/<machine name>/virtualbox/private_key paths. We also define an nginx group which consists of the load balancer and both web nodes.

The next file to create is the Ansible Playbook (playbook.yml) which tells Ansible which tasks to execute in which hosts:

---
- hosts: nginx
  tasks:
    - name: Install nginx
      ansible.builtin.apt:
        name: nginx
      become: yes

- hosts: node1
  tasks:
    - name: Copy hello from node 1
      ansible.builtin.copy:
        dest: /var/www/html/index.html
        content: 'Hello from Node 1!'
      become: yes

- hosts: node2
  tasks:
    - name: Copy hello from node 2
      ansible.builtin.copy:
        dest: /var/www/html/index.html
        content: 'Hello from Node 2!'
      become: yes

- hosts: lb
  tasks:
    - name: Copy nginx.conf to load balancer
      ansible.builtin.copy:
        src: nginx.conf
        dest: /etc/nginx/sites-enabled/default
      become: yes
    - name: Restart nginx
      ansible.builtin.service:
        name: nginx
        state: restarted
      become: yes
      

To keep it simple we go in a linear fashion: first use apt to install nginx in the nginx group from the inventory (lb, node1 and node2). Then we copy a welcome message to node1 and node2 (/var/www/html/index.html). Then override the nginx default configuration in the load balancer (/etc/nginx/sites-enabled/default) with the content of nginx.conf, and restart the service to load the new configuration.

Next, inside the provisioning directory, create a files directory and create the nginx.conf file inside of it:

upstream hello {
    server 172.17.177.22;
    server 172.17.177.23;
}

server {
    listen 80;

    location / {
        proxy_pass http://hello;
    }
}

This file configures nginx in the lb node as a load balancer for the two web nodes. It defaults to round robin.

Finally we’ll create the ansible.cfg file inside the provisioning directory to allow ssh to connect to the controlled nodes:

[defaults]
host_key_checking = no

You should have a directory structure like this:

It’s time to start it! Open a terminal where Vagrantfile is placed and enter:

vagrant up

Wait while Vagrant creates 4 virtual machines, installs Ansible in the controller node and runs the playbook to configure the load balancer and both web nodes.

Now go to http://localhost:8080 and you will see the welcome message from node1:

Reload the page several times and you will see the message change as the load balancer forwards the requests to node1 and node2 alternatively.

Wrapping up

Enter vagrant halt to stop the VMs and save some resources, or vagrant destroy -f to delete them, concluding this demo.

Vagrant is a very nice way to test with virtual machines. It can create a single VM or several VMs connected by virtual networks.

Integration with Ansible allows to test Ansible playbooks in your machine. It also supports other provisioners like Chef, Puppet, Docker and more, enabling the development of complex setups in a virtual environment, without the need for real servers.

Creating and provisioning virtual machines with Vagrant

Often we need to create local environments to test setting up new services. A great way to do it is by provisioning VMs with Vagrant.

What is Vagrant?

Vagrant is a cross-platform tool to automate the creation and management of VMs for development uses. You write the desired configuration for your VMs in a Vagrantfile and then use the vagrant command to start, stop and manage the VMs.

Installing Vagrant

Vagrant only automates the management of VMs. The VMs themselves are hosted by a provider like VirtualBox, VMware, Docker or Hyper-V. For this tutorial we are going to use VirtualBox on Windows 10.

Install VirtualBox

Vagrant requires a compatible version of VirtualBox. Download and install VirtualBox 6.1.16.

Install Vagrant

Download and install Vagrant 2.2.10. This version is compatible with the previously installed VirtualBox. Restart your machine.

Starting and provisioning a simple VM

First choose a working directory and then create a Vagrantfile by entering in a command line:

vagrant init bento/ubuntu-20.04

What we’ve done here is tell the vagrant command to create a Vagrantfile using the bento/ubuntu-20.04 box. VM images for Vagrant are called boxes, and you can find them here. I recommend to use the bento ones.

Let’s have a look at the Vagrantfile. Open it in an editor, I recommend to open the directory in Visual Studio Code:

The Vagrantfile is a description of how you want Vagrant to create your VMs. It’s written in Ruby, but you don’t need to know Ruby to edit it. The syntax is self-explanatory and has very good comments. If you removed all the comments this would be the entire file:

Vagrant.configure("2") do |config|
  config.vm.box = "bento/ubuntu-20.04"
end

It simply telling Vagrant to create a VM using the bento/ubuntu-20.04 box. It uses the Vagrant configuration format version “2” (the current one).

Let’s make it more interesting. Uncomment lines 66-69 by removing the leading hashes, they should look like this:

As the comments explain, this is telling Vagrant to run a script to install Apache in the VM. This provisioning script runs only the first time you start the VM. If you want to run provisioning in an already existing VM, you need to use the vagrant provision command.

Let’s also uncomment line 31 to tell Vagrant to map the VM’s por 80 to port 8080 in our machine:

Now start the VM! In the same directory where the Vagrantfile was created, enter in the command line:

vagrant up

You will see some logs and wait while Vagrant downloads the box and starts the VM, maps some ports and runs the provisioning script to install Apache. When it’s done, you can enter vagrant status to check that it was created correctly:

If you open VirtualBox you can also verify that your VM is running in VirtualBox:

Connecting to the Virtual Machine

Enter in the command line:

vagrant ssh

This uses ssh to connect to the VM. Now you are inside the VM in a Linux terminal:

Change directories to /vagrant and create a new file:

cd /vagrant
echo "Hello from Vagrant!" > hello.txt

Now open the file you created inside the VM in Visual Studio Code:

Vagrant by default enables file sharing between your host and the guest VM, synchronizing your working directory (the one where Vagrafile is) to the /vagrant directory in the VM.

And if you point the browser in your host machine to http://localhost:8080/, you will see the welcome page from the Apache server that is running inside the guest VM. Great!

Stopping and deleting the VM

To disconnect from the VM, press Ctrl + D or type exit at the command line. Now enter the following to stop the VM:

vagrant halt

If you want to delete the VM because you no longer need it or you want to start fresh:

vagrant destroy -f

If you didn’t provide the -f argument, it would ask for confirmation before deleting the VM. There are many other commands which you can discover by typing vagrant without arguments and by reading the documentation.

Conclusion

As you can see, Vagrant is a great tool for creating test environments in your machine. The real power comes from being able to share the Vagrantfile with other developers whom can then recreate the same environment in a Windows, Linux or Mac OS machine by just typing vagrant up.

In a next post we’ll explore a more advanced scenario involving several VMs connected via a virtual network.

Configurando Javadoc en Eclipse 2018-09 y Ubuntu 18.04 con OpenJDK 11

Si instalaste el paquete default-jdk en Ubuntu 18.04, por estas fechas (septiembre 2018) obtendrás el paquete openjdk-11-jdk. Es un poco confuso porque hasta septiembre de 2018 lo que en realidad instala es Java 10:

Captura de pantalla de 2018-09-22 18-32-27Sólo después de septiembre de 2018 se va a instalar Java 11 con el paquete openjdk-11-jdk.

También podrás comprobar que en Eclipse no está funcionando el Javadoc al hacer hover sobre las clases. Para solucionar el problema vamos a Window -> Preferences -> Java -> Installed JREs y seleccionamos el JDK “11”:

Captura de pantalla de 2018-09-22 18-48-29

Hacemos click en Edit, abrimos los detalles y seleccionamos Javadoc location:

Captura de pantalla de 2018-09-22 18-50-51

La URL es http://docs.oracle.com/javase/10/docs/api/. Asegúrate de usar http, no https. Puedes usar el botón Validate para comprobar la URL:

Captura de pantalla de 2018-09-22 18-53-22

También podemos configurar las sources para poder verlas desde Eclipse. Primero hay que instalar el paquete:

sudo apt install openjdk-11-source

Luego seleccionamos Source attachment en la ventana Edit JRE e ingresamos el path de las fuentes:

/usr/lib/jvm/openjdk-11/lib/src.zip

Captura de pantalla de 2018-09-22 19-25-54

Captura de pantalla de 2018-09-22 19-27-51

Captura de pantalla de 2018-09-22 19-38-24

De esta forma tenemos todo configurado y cuando hagamos hover vamos a ver el Javadoc y con F3 u Open Declaration las fuentes de la class library.

 

Problema de conexión Wi-Fi en Linux y laptops Lenovo

Si tienes una portátil Lenovo e instalaste o estás tratando de instalar Linux, puede que te hayas encontrado con desagrado que la conexión Wi-Fi no se puede activar. He observado este problema tanto en Ubuntu como en Fedora.

Para solucionarlo hagamos una prueba ingresando lo siguiente en la terminal:

sudo modprobe -r ideapad_laptop

Si funciona vas a ver que de inmediato aparece la conexión disponible. Una vez terminada la instalación de Linux, o si ya lo habías instalado, hacemos que este cambio sea permanente ingresando en la terminal:

echo "blacklist ideapad_laptop" | sudo tee -a /etc/modprobe.d/ideapad.conf

Los constantes problemas de Linux en el escritorio

Este problema se debe a un bug que ha permanecido durante años en el driver ideapad-laptop. Desafortunadamente este tipo de problemas sigue siendo muy común en Linux, impidiendo que se convierta en una buena alternativa, además gratuita, para el común de los usuarios.

Por eso es importante que como programador siempre tengas en cuenta que la finalidad de un sistema informático es facilitar las tareas al usuario, a quien poco le importa la belleza de la arquitectura ni la pureza de tu código si tu programa es difícil de usar y se convierte en un obstáculo más que en un facilitador para la solución de sus problemas.

Instalando un entorno de programación C en Windows con Mingw-w64 y Visual Studio Code

Windows no es un sistema operativo que cuenta con un buen soporte para la programación en C. Por razones que no quedan del todo claras, Microsoft se ha negado rotundamente a actualizar el soporte del compilador C de Visual Studio, el cual ha quedado estancado en el estándar C89.

Afortunadamente la comunidad Open Source viene al rescate con el proyecto Mingw-w64, que provee el compilador GCC, el debugger GDB, binutils, más las headers y bibliotecas necesarias para producir binarios nativos de Windows. Mingw-w64 es ampliamente utilizado y actualizado constantemente con las últimas versiones de GCC y tecnologías como OpenGL y DirectX.

Hay varias formas de obtener Mingw-w64, pero una de las mejores es usando MSYS2 (Minimal SYStem 2), una línea de comandos y entorno de programación que usaremos para instalar Mingw-w64.

1. Instalando MSYS2

Para comenzar, vamos a msys2.github.io y descargamos el instalador de 64 bits (el que dice “x86_64”) que nos permitirá compilar programas nativos de 32 y 64 bits. Abrimos el archivo y lo instalamos con todas las opciones por defecto. Si al final dejamos tildada la opción “Run MSYS2 now“, vamos a ver la consola de MSYS2 MSYS que luce de esta manera:

msys2-console

MSYS2 usa el gestor de paquetes pacman de Arch Linux para instalar y actualizar el paquetes. Hay que seguir las instrucciones en la página de MSYS2 para actualizar el entorno. Al momento de escribir este artículo (MSYS2 versión msys2-x86_64-20160921) hay que ingresar en la consola (atento a las mayúsculas):

During any of the following steps a message may appear asking you to close the console. In that case, close the window with the X button and reopen it from the Start menu by searching for MSYS2 MSYS.

pacman -Sy pacman
pacman -Syu
pacman -Su

MSYS2 installs by default into the c:\msys64 directory. In the Windows Start menu you will see that the MSYS2 64bit folder contains three consoles, which can be somewhat confusing. All three open a bash console, but with different environment variables set for different uses:

  • MSYS2 MSYS: the MSYS console we will use to run pacman and install packages. You can compile programs here, but they will depend on msys-2.0.dll, a POSIX emulation layer. Suffice it to say we will not use this one to compile.
  • MSYS2 MinGW 32bit: provides a programming environment for compiling 32-bit native Windows programs that depend on the MSVC runtime.
  • MSYS2 MinGW 64bit: this is the programming environment we will use to compile 64-bit native Windows programs.

2. Installing Mingw-w64

Now we will actually install Mingw-w64. MSYS2 only provides the bash console and other auxiliary tools; to get the compiler we must install Mingw-w64.

As of this writing there was a bug in MSYS2 (version msys2-x86_64-20160921, fixed in later releases), so we must go to the C:\msys64 directory and create a directory inside it named “mingw64”.

To download and install the package, open the MSYS2 MSYS console and enter:

pacman -S mingw-w64-x86_64-toolchain

This installs inside MSYS2 the toolchain needed to compile programs, which includes binutils, make, pkg-config, and the gcc compiler.

We can now close the MSYS2 MSYS window, go to the Start menu and open the MSYS2 MinGW 64-bit console (not MSYS2 MSYS). If everything went well, entering “gcc -v” will show the compiler version:

[Screenshot: gcc -v showing the compiler version]

Finally, add the MinGW path to the Windows path. To do this, open a command prompt (Start menu -> cmd) and enter:

setx path "%path%;C:\msys64\mingw64\bin"

[Screenshot: the setx command in the Windows command prompt]

3. Installing and configuring Visual Studio Code

Once the mingw-w64-x86_64-toolchain package is installed, it is perfectly possible to compile programs from the MSYS2 MinGW 64-bit console, using the command-line interface to invoke commands such as cc and make.

However, if we want a much friendlier environment, we can use Visual Studio Code, the Microsoft editor that positions itself as the lightweight, open source alternative to Visual Studio.

Go to code.visualstudio.com, download the installer and follow the simple instructions. Then create a directory on your disk for your first C project. Unlike Visual Studio, Visual Studio Code is based on directories rather than projects, so go to File -> Open Folder and select the empty directory you just created.

Next, open the Command Palette (Ctrl + Shift + P), type Extensions: Install Extensions and press Enter. This opens the Extensions panel, where we will search for cpptools, the Microsoft extension that adds C and C++ support. Install it (by clicking Install) and enable it (by clicking Reload).

In the File Explorer on the left, create a file named hello.c containing the classic Hello World code:

[Screenshot: hello.c open in Visual Studio Code]

Save the file (Ctrl + S) and click to set a breakpoint on line 6. Then place the cursor on line 1, which appears underlined in green; a yellow light bulb appears, and clicking it lets us configure the include path. This creates a c_cpp_properties.json file in our directory that tells the cpptools extension where the C headers are. Edit the Win32 section of that file and change the includePath value to the directory C:/msys64/mingw64/include (entered exactly as shown). Save the file (Ctrl + S).

[Screenshot: c_cpp_properties.json]
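The exact file generated varies with the cpptools version, but as a sketch, the relevant part of the Win32 section looks roughly like this; only the includePath value needs editing:

```json
{
    "configurations": [
        {
            "name": "Win32",
            "includePath": ["C:/msys64/mingw64/include"]
        }
    ]
}
```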

To configure the build, open the Command Palette (Ctrl + Shift + P), search for Tasks: Configure Task Runner and select the Others option. This creates a tasks.json file, which we will edit as follows:

[Screenshot: tasks.json]
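As a sketch, assuming the 0.1.0 task-runner format used by Visual Studio Code at the time, the edited tasks.json would look roughly like this; -g adds the debug symbols for gdb, followed by the source file and the output file:

```json
{
    "version": "0.1.0",
    "command": "C:/msys64/mingw64/bin/gcc.exe",
    "isShellCommand": true,
    "args": ["-g", "hello.c", "-o", "hello.exe"],
    "showOutput": "always"
}
```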

Save the file (Ctrl + S). What we have done here is tell Visual Studio Code where to find the cc compiler we installed earlier with MSYS2. The parameters tell it to add debug symbols for gdb, and give the name of our source file and the name of the output file.

To compile, press Ctrl + Shift + B. If everything went well, our hello.exe executable will appear in the file explorer. Otherwise, an error message will appear in the Output panel, and we should check whether the path and parameters were configured correctly.

Finally, we will run our program using gdb, the classic GNU debugger. Open the Debug sidebar (Ctrl + Shift + D or click the bug icon), then click the gear icon to configure, and select the C++ (GDB/LLDB) option. This creates a launch.json file, which we will edit to set the paths to our executable and to the debugger. In the C++ Launch configuration, edit the program property and add a miDebuggerPath property with the path to gdb, as shown below:

[Screenshot: launch.json]
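The generated file varies with the cpptools version, but as a sketch, the edited C++ Launch configuration would look roughly like this:

```json
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "C++ Launch",
            "type": "cppdbg",
            "request": "launch",
            "program": "${workspaceRoot}/hello.exe",
            "miDebuggerPath": "C:/msys64/mingw64/bin/gdb.exe",
            "MIMode": "gdb",
            "cwd": "${workspaceRoot}",
            "stopAtEntry": false
        }
    ]
}
```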

Save the file (Ctrl + S), then press F5 or click the green arrow to run. If everything went well, our program's console should appear, and Code will stop at the breakpoint we set earlier. The sidebar shows local variables, a watch section, and the call stack. Below is the debug console, which lets you send commands to gdb, and above is the bar for controlling execution, with the same keyboard shortcuts as Visual Studio:

[Screenshot: debugging hello.c in Visual Studio Code]

As you can see, Visual Studio Code is a very pleasant and fairly complete environment; despite being new (it appeared in 2015), it has developed quite rapidly. Perfect for learning and taking your first steps programming in C on Windows.

Next steps

We have configured Visual Studio Code to compile, run and debug, in a simple and visual way, a program with a single source file, invoking the cc compiler directly. The next step you could take is editing the tasks.json file to use make instead of cc, so you can build more complex programs (in MinGW the executable is called mingw32-make.exe). Can you figure out how?
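Without giving the whole answer away, the make side of it could start from a minimal Makefile like this sketch (run with mingw32-make.exe from the project directory); wiring it into tasks.json is left as the exercise:

```make
# Minimal Makefile: same flags as before, -g for gdb debug symbols
CC = gcc
CFLAGS = -g -Wall

hello.exe: hello.c
	$(CC) $(CFLAGS) hello.c -o hello.exe
```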

The role of the English language

Can you be a good programmer without speaking English?

It is an interesting question. I suppose so, but in my personal experience, not knowing English would have been a major disadvantage.

The fact is that English, for various reasons, is the universal language of science and business, as French once was. English opens the doors to a world of 1.5 billion speakers who produce the most important scientific and technological innovations.

The latest research, standards, documentation and books are overwhelmingly published in English, and sadly many are never translated. If you want to join an international team, an open source project, a forum or a mailing list, English is also a fundamental requirement.

The difficulty of English

Is English hard to learn? I would say yes, because as a Germanic language it differs from Spanish in its syntax, semantics, sentence structure, verb conjugation... Learning English means learning another way of thinking.

Not to mention slang, those expressions specific to each language and region that have no literal translation, such as "hanging out" (¿¿colgando afuera?? No: pasando el rato). You can only pick these up over time and with practice.

Perhaps the most complicated part of English is pronunciation. Unlike Spanish, English has no fixed pronunciation; the sound of a letter changes depending on the letters that precede and follow it, so there is no choice but to memorize the hundreds of possible combinations. To complicate matters further, pronunciation varies widely between American and British English, and even between regions of the same country.

On the other hand, some things are easier than in Spanish. For example, there are no written accents, so you never have to worry about placing them correctly. And most words are gender neutral (the house, the sun, the sea, the city), one less thing to memorize.

Resources for learning

Fortunately, there are plenty of resources for learning. In-person courses are the best. An important part of my own learning came from the courses I took as part of my secondary and university education. The government of the city of Buenos Aires offers free language courses through the Lenguas en los Barrios program, and there are countless language academies, many dedicated exclusively to English.

There are also many free online courses, easy to find with any search engine, and they are very good. The downside is that you do not have a teacher to guide you and correct things like pronunciation, and they require a lot of discipline to follow through to the end.

Another very good option is apps such as Duolingo, which I highly recommend; it is excellent for learning pronunciation and memorizing vocabulary. Duolingo is available for iPhone, Android and Windows Phone, and the website has extra features, such as explanations and conjugation tables, that are not present in the mobile app.

All of this should always be complemented by constant practice. Reading, music and television help your memory enormously, so it is essential to make them part of a well-rounded learning process.