We truly love Scala. Scala has been one of the pillars of Codacy from the start. Joao, who has an Enterprise Java background, has loved the language from the beginning, and we’ve since taught Scala to new people joining the company.
Initially, we wanted the expressive power of dynamic languages without losing static type safety. We placed our bets on Scala and, judging by adoption rates, we believe it was a good decision. The type system is Turing-complete; that alone should say enough about its potential.
Scala has also helped us with hiring; typically, if someone is interested in working with Scala, they will probably be a good fit for us as a company. It takes a special spark for an engineer to be curious about and interested in working with new languages. Having said that, Scala is no longer a new language. Its adoption has been growing steadily, fueled by the big data movement and platforms such as Spark.
When defining Codacy’s architecture, we knew that a system built to analyze millions of lines of code per hour would have to treat scalability, distribution, and concurrency as first-class citizens. We were already sold on Scala since, as I mentioned, we’ve been fans of the language for a long time. The process of choosing Play and Akka was simple: both are built for massive operations and have the backing of a strong, growing company in Typesafe. We treat every single operation as an immutable task for an actor. From parsing code, to dealing with code repositories, to applying code patterns to ASTs: everything is a task for us that we distribute to pools of actors. Akka excels at this job.
Now, we’re in a position where our current infrastructure demands further decomposition into services and the breakup of our monolith. Because we had already modeled our system as components, this has been easy from an architectural standpoint.
We are especially excited about the great work from the Scala team on Abide, about scala.meta from Eugene Burmako and his team, and about the TASTY spec announced by Martin Odersky. These projects offer many of the things we’ve been looking for to support customers interested in deeper code analysis.
Scala has been a great decision for us and it’s paying off in many ways.
A few weeks ago the Bitbucket team approached us about becoming one of the first Bitbucket integrators on Atlassian Connect.
Today we are happy to announce that the Codacy add-on is available on Bitbucket!
Atlassian Connect for Bitbucket?
Atlassian Connect for Bitbucket is the new (r)evolution for Bitbucket, which is used by over 3 million developers worldwide. It lets developers integrate their tools directly into Bitbucket, hassle-free.
Why does it matter?
Developers are spending more and more time switching between tools instead of focusing on what really matters: coding. Atlassian Connect for Bitbucket removes those barriers, finally unifying tools across the development lifecycle.
At Codacy we are very excited about this new, fully integrated Bitbucket. As a provider of automated code review software, our mission is to take time and frustration out of the code review process. Delivering our results in the tools our users already use is a priority for us.
Today we are also happy to announce that Pull Requests will be fully supported on Bitbucket. We are already working on a few exciting features to complement this add-on. Stay tuned, add Codacy to your Bitbucket projects, and consider creating your own add-ons!
Hello there, my name is Sandra Wolf and I’m the newest member of the Codacy team (yay!).
I have joined as Developer Advocate: I will help you with all your #CodeReviews problems, and together we will take care of your technical debt. We will meet at tech conferences, meetups, user groups, and more; I’m planning to be everywhere! I would love to help you be as successful as possible with Codacy, take advantage of everything Codacy has to offer, and make sure you have fun using it.
At Codacy we don’t rely only on external tools to find code patterns.
Especially when it comes to Scala, we also have a nice repository of homemade patterns, so we need to be able to process Scala source code. By processing, we mainly mean transforming source code into Abstract Syntax Trees (ASTs).
Everything you find here you can try yourself in the REPL. Be sure to add the following sbt resolver and dependencies.
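The original setup instructions aren't reproduced here, but the build configuration would look roughly like this (the versions and the snapshots resolver are assumptions; adjust them to your Scala version):

```scala
// Hypothetical build.sbt snippet; versions are illustrative, not prescriptive.
resolvers += Resolver.sonatypeRepo("snapshots")

libraryDependencies ++= Seq(
  "org.scala-lang" % "scala-compiler" % scalaVersion.value, // for the scala.reflect toolbox
  "org.scalameta" %% "scalameta"      % "4.8.0"             // for scala.meta
)
```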
(Not yet deprecated) scala.reflect
Up until now we used the standard scala.reflect.api and the toolbox it provides. To parse Scala code with the help of the reflection toolbox, we need to run the following:
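A minimal sketch of that parsing step (the snippet being parsed is made up for illustration):

```scala
import scala.reflect.runtime.currentMirror
import scala.tools.reflect.ToolBox // provides the mkToolBox enrichment

val toolbox = currentMirror.mkToolBox()

// Parse a (made-up) snippet of Scala source into an AST
val tree = toolbox.parse("object Example { def double(x: Int): Int = x * 2 }")
```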
The resulting tree variable is of type toolbox.u.Tree, and this is the structure we use(d) to find problematic code. That doesn’t look too complicated, you might say. Well, you’re right, but the problems start here. Unfortunately, those Trees are very complex (after all, they represent the complete AST of your program). The documentation is also rather poor: as a writer of patterns, I had to rely mainly on Scaladoc and partly on reflection itself.
Sidenote: real-life .scala files usually reside in packages, which means a typical source file will start with
package foo ... The current implementation of the toolbox, however, is not able to parse such files. We were forced to implement a workaround that has preyed on my mind ever since.
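To illustrate the limitation (the snippet is made up; the exact error depends on your Scala version):

```scala
import scala.reflect.runtime.currentMirror
import scala.tools.reflect.ToolBox
import scala.util.Try

val toolbox = currentMirror.mkToolBox()

// A file-style source with a package clause; the toolbox cannot parse this
// and throws a ToolBoxError instead of returning a Tree.
val result = Try(toolbox.parse("package foo\n\nobject Bar { val x = 1 }"))
```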
All of this doesn’t really make it easy for someone who doesn’t know the internals of scala.reflect.api to start writing patterns from scratch. But that’s exactly what we want: we want to make it possible for our users to write custom patterns for their issues.
scala.meta to the rescue
Recently we started a very promising collaboration with Eugene Burmako and Mathieu Demarne from scala.meta. scala.meta was created to simplify metaprogramming in Scala and will hopefully be the successor of the current scala.reflect. For us, it already is.
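As a taste, here is roughly what parsing looks like with scala.meta (sketched against a recent scalameta release; the API has evolved since this post was first written, and the parsed snippet is made up):

```scala
import scala.meta._

// Parse a (made-up) snippet of Scala source into a scala.meta Source tree
val tree: Source = "object Example { def double(x: Int): Int = x * 2 }".parse[Source].get

// Unlike scala.reflect trees, scala.meta trees preserve the original syntax
println(tree.syntax)
```

Note how the tree round-trips back to readable source via `syntax`, which already makes exploring ASTs far friendlier than the reflection toolbox.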
At Codacy, we use Scala for our entire core, complemented by the Play Framework and built with sbt.
sbt is a great build tool, especially if you want to focus more on coding than on compiling :D.
Earlier this month we decided to do a complete separation of our application.
Although we already had several different sbt sub-projects, we had to define some structure and rules to keep them in sync.
With all the simplicity and ease of creating a project comes great responsibility: we want our build to stay simple and easy to maintain.
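As a rough sketch, a multi-project sbt build with shared settings might look like this (the module names here are hypothetical, not our actual layout):

```scala
// build.sbt (hypothetical module names)
lazy val commonSettings = Seq(
  organization := "com.example",
  scalaVersion := "2.11.12",
  scalacOptions ++= Seq("-deprecation", "-feature")
)

lazy val core = (project in file("core"))
  .settings(commonSettings)

lazy val web = (project in file("web"))
  .dependsOn(core)
  .settings(commonSettings)

lazy val root = (project in file("."))
  .aggregate(core, web)
```

Keeping settings in one shared `commonSettings` sequence is one simple rule that keeps sub-projects from drifting apart.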
Updating a large table in Postgres is not as straightforward as it seems. If you have a table with hundreds of millions of rows, you will find that simple operations, such as adding a column or changing a column type, are hard to do in a timely manner.
Doing these kinds of operations without downtime is an even harder challenge. In this blog post I will outline a few strategies to minimize the impact on table availability while managing large data sets.
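One such strategy is to backfill a new column in small batches, so each statement locks only a few rows at a time. A sketch of a helper that builds one batch of such an update (the table and column names are made up; in practice you would execute the generated statement in a loop until it affects zero rows):

```scala
// Builds one batch of a backfill UPDATE. Running it repeatedly until it
// updates 0 rows keeps each transaction, and its row locks, short-lived.
def backfillBatchSql(table: String, newCol: String, oldCol: String, batchSize: Int): String =
  s"UPDATE $table SET $newCol = $oldCol " +
    s"WHERE id IN (SELECT id FROM $table WHERE $newCol IS NULL LIMIT $batchSize)"

println(backfillBatchSql("events", "new_status", "status", 10000))
```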
Working on a startup is hard. You get a limited amount of resources and a limited amount of time to create a “temporary organisation designed to search for a repeatable and scalable business model” [source]. And the search is fast. Really fast. So fast that most will not find* that business model.
*This is also referred to as a failed startup; I think this ‘failed’ terminology is something we should not use, as it is quite far from reality, but that would be another blog post by itself.
Before having seed investment, it seemed easy: although we had no money, we would hack something together in our free time, trying to turn our vision into a product, something we could show to others and get feedback on. Having no customers meant we could rapidly prototype and start over.
It was fast. Really fast.
Then, one day, you have a product. You have a customer. And you have investment.