Friday, March 4, 2011

Showing the current activity of the app from a notification in Android

Showing an error message from a long-running background thread is easy on Windows (you can get the topmost window and show a dialog above it, or display a message somewhere in the main window). On Android, the recommended way is to alert the user with a Notification. Nonetheless, I tried to show a dialog from the background thread. But AlertDialog.Builder.create throws an exception when given the application context, even when called from the thread of getMainLooper, so I gave up.

Here is the error notification I ended up with. The trick for bringing up the app's current activity is Intent.CATEGORY_LAUNCHER combined with Intent.ACTION_MAIN.
/**
 * Shows an error message as a notification.
 */
private void showError(int notificationId, String text) {
    // See the documentation of the Application class
    // for how to implement getInstance().
    Context ctx = MyApplication.getInstance();

    // ACTION_MAIN + CATEGORY_LAUNCHER makes the PendingIntent bring the
    // app's task to the front in its current state, instead of always
    // starting HomeActivity anew.
    Intent intent = new Intent();
    intent.addCategory(Intent.CATEGORY_LAUNCHER);
    intent.setAction(Intent.ACTION_MAIN);
    intent.setComponent(new ComponentName(ctx, HomeActivity.class));

    PendingIntent pendingIntent = PendingIntent.getActivity(ctx, 0, intent, 0);

    Notification notification =
            new Notification(R.drawable.launch_icon, text, System.currentTimeMillis());
    notification.flags |= Notification.FLAG_AUTO_CANCEL;
    notification.setLatestEventInfo(ctx, text, null, pendingIntent);

    NotificationManager nm = (NotificationManager)
            ctx.getSystemService(Context.NOTIFICATION_SERVICE);
    nm.notify(notificationId, notification);
}

Monday, February 28, 2011

Open source, a marketing campaign?

Why are big companies contributing to open source, e.g. Linux, Android, Eclipse? Are they doing it because they can't afford Windows, Visual Studio, or IntelliJ IDEA?

I think they're doing it for the marketing gain. IBM wins a contract more easily when the client sees that they're developing the IDE too. And IBM only has to invest a little into Eclipse, because there are other "fools" who do the rest of the work for free, or just for reputation.

Similarly, it's good marketing for hardware manufacturers to invest a little in Linux development. System administrators, who recommend what equipment to buy, are "bought" by this marketing.

Or for Google to invest a little in Android. Though they now do most of the work, they probably hope that later 95% of it will be taken over by volunteers, and they'll be left with the marketing benefits.

The story is the same with donations and charities. You, as a company, buy the members of the foundation to which you donate. So, as individuals, they'll buy from you instead of from the competition, and they'll recommend you. You also buy those who merely find out about your donation and like the cause you donated to.

Thursday, February 17, 2011

How can smart people write spaghetti/duplicated code?

By everybody working on everything. And by not assigning similar tasks to the same developer (e.g. three screens that differ only slightly). By assigning all the layers of feature 1 to developer 1 and all the layers of feature 2 to developer 2, and implementing them concurrently, without any rules or "model code" to follow.

Usually layer 1 of feature 1 (L1F1) is very similar to layer 1 of feature 2 (L1F2) and layer 2 of feature 1 (L2F1) is very similar to layer 2 of feature 2 (L2F2).

With developer 1 working on L1F1 and developer 2 working on L1F2, both of them need to understand layer 1 and write it well, and each has only half the time developer 1 would have had if he wrote layer 1 for both features. And understanding and organizing code well takes time.

Saturday, February 12, 2011

Outsourcing communication problems

When you work in the same office with someone and you chat every day, you just ask your questions as they come up; you don't care whether a question is silly or not. I think this is because if 90% of what you say is chatting and 10% is questions, then even if all the questions are silly, you build only a 10% silly image of yourself. Plus, you see your colleague's reaction (which may not be 100% accurate), so you can tell whether you should ask any more of these silly questions.

Contrast this with having to ask questions of a colleague in another country. You don't have many common topics to chat about, and even if you want to chat, you can't read his reactions to tell whether he wants to. And writing is a little harder than speaking anyway. When you ask the question, you don't see the reaction either, so you don't know if he is upset that you disturb him with these questions. And if the questions are silly, you don't have the chatting to dilute your silly image.

So I think that rather than having a lot of developers ask questions of somebody in a different country, it would be better to have a local architect who knows the spec and the architecture and asks the PM about everything. After the architect has talked 100 hours with the PM, he won't damage his image much by asking a few silly questions. Nor will the PM damage his, by telling the architect to figure out the answer to a question himself. And the developers, who chat with the architect, can ask him everything without inhibitions.

That is, minimizing the interface between separate countries should be a goal, just as minimizing the interface between the modules assigned to different programmers is.

I know that the architect sits in the PM's country for the sake of better communication between the two, that is, so that the architect (who knows the state of the code) isn't afraid to tell the PM anything. But this works only if the architect's main task is the architecture itself (knowing the spec and designing the interfaces between the modules), not writing code. If you have a remote architect who is mostly concerned with writing code, a developer will be inhibited about asking him too much, because he doesn't know whether the architect cares about those questions.

Monday, February 7, 2011

The curse of rigid deadlines

Suppose you have to write a client for a server. You have a spec with missing details. The person you could ask about those details is on vacation. You start to code. You make an architectural mistake by misunderstanding a vague part of the spec. You wait to be given a user name and a password for the server. Two weeks pass; meanwhile you code. Finally, access to the server. By trying the server, you discover your architectural mistake. Now it would take the same time to redo the architecture as to continue with the existing one. You decide to continue with the existing one. The project manager moves to another project; a new one arrives.

Some unplanned problems, unrelated to the architecture, occur and slow the project down further. No server access for a while, again. Milestones come and go; they're missed, but the end dates of the new ones stay the same. You're stressed that you won't complete the project in time. You are slow because this is your first project in this language. Not all the features are assigned to people, so you cannot plan ahead (do things that slow you down at first but speed you up later). You cannot work well because of this stress. The project slows down even more. You cannot take a vacation to recover from the stress, because that would slow the project down even more.

Here is another opinion about deadlines: blog.jayway.com/2010/08/12/no-deadlines/


Whole-project deadlines may not work even if they are 100% accurate. The problem is that a programmer does not know how long a task he has never done before will take. Deadlines help only with repetitive tasks, and even then, only when they sit very close to the achievability level. A factory worker who makes shirts, knows he made 38 in the first 4 hours of the day, and has to make 80 that day, is motivated to hit that target: he speeds up, makes 11 the next hour, and gets feedback that he is now closer to the target.
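The shirt-maker's arithmetic can be sketched in a few lines (the numbers come from the example above; the class and method names are mine):

```java
// Computes the pace needed to hit a daily target, given progress so far.
public class DeadlinePace {
    static double requiredPace(int target, int done, double hoursLeft) {
        return (target - done) / hoursLeft;
    }

    public static void main(String[] args) {
        double current = 38 / 4.0;               // 9.5 shirts/hour so far
        double needed = requiredPace(80, 38, 4); // 10.5 shirts/hour to hit 80
        System.out.printf("current %.1f/h, needed %.1f/h%n", current, needed);
        // Making 11 the next hour beats the needed pace, so the feedback
        // ("I'm catching up") is immediate and concrete.
    }
}
```

The point is that the gap between current pace and needed pace is visible every hour; a whole-project deadline gives no such hourly feedback.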

Too-aggressive deadlines are bad because of the stress of wondering what will happen to me when I miss the deadline, plus the shortcuts I'm tempted to take (making the program less testable), which prove to be wrong (more debugging time, because the bugs become much harder to find).

Friday, February 4, 2011

Why I don't like programming

I chose to be a programmer because I thought programming was mostly thinking about how to solve a problem in a language I already know. That is, that I would have to think, not memorize: my brain working more like a CPU than like RAM. 90% CPU, 10% RAM.

And it proved to be the reverse. Programming nowadays means 90% RAM, 10% CPU. That is, I spend 90% of my time learning APIs, and only 10% on real thinking, i.e. on algorithms.

It resembles the boring classes from school, like the history class that was about memorizing the dates and places where some former Romanian ruler fought the Turks. It's nothing like the math classes, where you learned the formulas in 10% of the time and spent the other 90% thinking about solving problems.

That is, you don't need to be smart to be a programmer; you just need a lot of RAM. A good programmer is not a smart programmer, but one who has memorized the API the project uses.

And nowadays APIs are big enough to take 1-2 years to learn, and after 1-2 years comes a switch to another API. So it becomes a perpetual struggle to memorize. By the time you'd have become good and productive in an API, and proud that you can do something fast, it's gone.

The other option, not switching APIs, isn't viable either, because you'd be jobless in a few years.

So programming being about algorithms, or being fun, is just a school myth.

Thursday, January 6, 2011

OOP, a marketing lie?

In commercials, we are told a lot of lies. For example, some ads say that carbon has a 10 times better strength-to-weight ratio than aluminum. Yet bicycles made from carbon, though not even half the weight of aluminum ones, break into pieces more often. The ads forget to mention that the strength was tested in only one direction, a test with no relevance to reality.

We are told that OOP makes software easier to develop, document and maintain.

But contrary to this, I find that when doing OOP, I have to think a lot more when I start writing new code: where should this code go, and how should the classes be organized? In procedural programming, the organization comes naturally: I start with a procedure and add subprocedures until it is finished.
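Here is a toy sketch of what I mean by the organization coming naturally (a hypothetical report task; all names are invented): you start with the top procedure and carve out subprocedures only as the body grows.

```java
// Top-down procedural style: one entry procedure, split into
// subprocedures only when a step gets too big to inline.
public class SalesReport {
    static String buildReport(int[] sales) {
        return header() + body(sales) + footer(total(sales));
    }

    static String header() {
        return "Sales report\n";
    }

    static String body(int[] sales) {
        StringBuilder sb = new StringBuilder();
        for (int s : sales) sb.append(s).append('\n');
        return sb.toString();
    }

    static int total(int[] sales) {
        int sum = 0;
        for (int s : sales) sum += s;
        return sum;
    }

    static String footer(int total) {
        return "Total: " + total + "\n";
    }
}
```

There is no upfront design question here: each subprocedure exists only because buildReport needed it.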

So, what's the reason for this seeming contradiction? Why is almost everybody using OOP?

The reason for OOP is not that we can write a 3 man-year OOP program more easily than a procedural one. For such small projects, I think procedural programming is faster.

The real reason for OOP is procedure names in big APIs. If .NET had a procedural API, its designers would have had a lot of headaches coming up with good procedure names. And even if the API writers came up with some names, a huge flat procedural API would be harder to use than an object-oriented one. (The user would need to learn many more names: procedure names instead of class names.) While APIs were small (e.g. Turbo Pascal), it was easier to come up with good procedure names than with classes.
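To illustrate the naming argument with invented names (not a real API): in a flat API, every procedure needs a globally unique name, so the qualifier migrates into the name itself; with classes, the type supplies the qualifier, and short names can be reused.

```java
// Flat/procedural style: globally unique names, qualifier in the name.
class FlatApi {
    static String fileReadLine(String file)   { return "line from file " + file; }
    static String socketReadLine(String host) { return "line from host " + host; }
}

// Object-oriented style: the class name carries the qualifier,
// so both readers can share the short method name readLine().
class TextFile {
    private final String name;
    TextFile(String name) { this.name = name; }
    String readLine() { return "line from file " + name; }
}

class TcpLine {
    private final String host;
    TcpLine(String host) { this.host = host; }
    String readLine() { return "line from host " + host; }
}
```

The user then memorizes a handful of class names and discovers the methods from the object, instead of memorizing every flat procedure name up front.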

The whole story is similar to that of the file system: DOS 1.0 did not have directories, because they were an unneeded complication while files were few.