PGDI at SERPRO

In March 2010, I joined the Federal Data Processing Service (SERPRO), a public company under Brazil's Ministry of Finance. I entered through a highly competitive public examination, earning my first permanent position in a prominent federal selection process.

SERPRO is the largest public information technology company in Latin America, with over 7,000 employees. For 50 years, it has been modernizing the Brazilian State with strategic solutions for the country.

Right at the beginning of my journey at the company, I had the opportunity to take several training courses, the most important of which was training in the Demoiselle Framework. Demoiselle implements the concept of an integrating framework: it offers a range of tools and projects that ease the construction of applications, minimizing the time spent choosing and integrating specialist frameworks, which increases productivity and helps guarantee the maintainability of the systems.

This was the largest organization I had worked with so far: a public company with national reach, teams spread across several regional offices, and thousands of developers. In that role I learned a great deal about Design Patterns, the Serpro Software Development Process (PSDS), configuration management, and good practices for software testing, among other topics.

The first project I worked on was the Inscription Debt Generator Program (PGDI); the module I was responsible for imported files with information about debts to be registered in the National Active Debt. The PGDI was part of a family of systems that SERPRO developed for the Attorney General's Office of the National Treasury.

The development was done in the Java programming language, using the Demoiselle Framework and Design Patterns such as Business Delegate and Facade, among others. I remember well that the patterns required creating many extra classes and interfaces, but the code ended up much cleaner and better organized.
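To illustrate the layering these patterns imply, here is a minimal sketch in plain Java of how a Business Delegate and a Facade might wrap an import subsystem. All class and method names are hypothetical illustrations; none of them come from the actual PGDI code or from the Demoiselle Framework.

```java
import java.io.InputStream;
import java.util.ArrayList;
import java.util.List;

// Facade: a single, simplified entry point that hides the subsystem's internals.
class DebtImportFacade {
    private final DebtFileParser parser = new DebtFileParser();
    private final DebtValidator validator = new DebtValidator();
    private final DebtRepository repository = new DebtRepository();

    public void importFile(InputStream file) {
        for (DebtRecord record : parser.parse(file)) {
            validator.validate(record);
            repository.save(record);
        }
    }
}

// Business Delegate: shields the presentation layer from the details of
// locating and invoking the business service (exception translation,
// transactions, lookups, and so on).
class DebtImportDelegate {
    private final DebtImportFacade facade = new DebtImportFacade();

    public void importFile(InputStream file) {
        facade.importFile(file);
    }
}

// Stub supporting types, just to make the sketch self-contained.
class DebtRecord {}
class DebtFileParser { List<DebtRecord> parse(InputStream in) { return new ArrayList<>(); } }
class DebtValidator { void validate(DebtRecord r) {} }
class DebtRepository { void save(DebtRecord r) {} }
```

The extra classes and interfaces the patterns demand are visible even in this tiny sketch, which is exactly the trade-off mentioned above: more files, but each layer has one clear responsibility.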

During this period, I needed to make some improvements to the specification of the import file, as it did not cover certain special situations.

Since these files were received from large systems, and JSON had not even been standardized at the time, we needed to define a format for separating the fields. CSV was not sufficient because the file had a hierarchical structure, and XML was not recommended due to the large volume of data.

To solve this problem, the specification defined short, very specific character sequences to be used as separators: field separators, subfield separators, and record (line) separators.
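The sketch below shows how such a file can be parsed in Java. The separator strings used here are hypothetical placeholders; the actual sequences defined in the PGDI import specification are not reproduced.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.regex.Pattern;

// Minimal sketch of parsing a hierarchical flat file with custom separators.
public class SeparatorParser {
    private static final String RECORD_SEP = "#@#";  // hypothetical record (line) separator
    private static final String FIELD_SEP  = "|~|";  // hypothetical field separator
    private static final String SUB_SEP    = ";;";   // hypothetical subfield separator

    /** Splits the raw content into records, fields and subfields. */
    public static List<List<List<String>>> parse(String content) {
        List<List<List<String>>> records = new ArrayList<>();
        for (String record : content.split(Pattern.quote(RECORD_SEP))) {
            List<List<String>> fields = new ArrayList<>();
            for (String field : record.split(Pattern.quote(FIELD_SEP))) {
                fields.add(Arrays.asList(field.split(Pattern.quote(SUB_SEP))));
            }
            records.add(fields);
        }
        return records;
    }

    public static void main(String[] args) {
        String sample = "123|~|John Doe|~|debt1;;debt2#@#456|~|Jane Roe|~|debt3";
        System.out.println(parse(sample));
        // [[[123], [John Doe], [debt1, debt2]], [[456], [Jane Roe], [debt3]]]
    }
}
```

Because each level of the hierarchy has its own separator, a single pass of nested splits is enough to recover records, fields, and subfields without the overhead XML would have added at that volume.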