Friday, April 05, 2019

Deploying Angular 6 + Spring Boot + Postgres to AWS

You have completed your Angular application and now you are looking for some deployment options... Among the many choices in existence today you will find AWS, GCP, Azure, PCF and a few others in the cloud. In this post, I will explain what I needed to do to deploy my services to AWS and keep the cost low (or nonexistent with the AWS free tier). There is always the option to get full 'by the book' services and pay for those, but it is better to understand what your options are when deploying the application and then weigh your revenue against your expenses. As it stands, the My Open Invoice application is currently designed to serve a single contractor. Adding another company requires a new URL and a new setup. This can, of course, be upgraded with a few tweaks in the data model and the application itself. We can leave that for some other blog, though. For now, let me introduce you to my simple architecture:
  1. The front end is an Angular 6 application built on Google Material Design, capable of rendering on both desktop and mobile devices
  2. The middle tier is Spring Boot 2
  3. The database is Postgres (H2 as the test database)
  4. The application has a cache (Ehcache), currently used only for the RSS feed
  5. Authentication and authorization are done through JWT with a Bearer token.
Amazon offers several options for hosting your web application, like an EC2 instance and an S3 bucket. Then there is CloudFront, which is used to cache your content (the dist folder) and front it with HTTPS access. S3 has durability and redundancy, Route53 has DNS covered, and RDS has your database. Elastic Beanstalk is used for EC2 instance generation, auto-scaling and load-balancing setup. CloudWatch is used for log tracing, and there are several other options that you can turn on for multiple instances, load balancing, reporting, caching, etc.
My goal here is to create something that won't drain your money away but will still give you a decent application with the ability to have backups. I will also mention the options that would be needed for a more robust solution.

This is my current setup:


Let us start by explaining the components that I have used:

  • Angular 6
    • This is the client-facing application, built on a template that is both desktop and mobile friendly. The app is compiled into the /dist directory with production settings and code optimization, and uploaded to the S3 bucket.
  • S3 Bucket
    • This is a standard AWS S3 bucket in one region. You can set up cross-region replication, but Amazon already guarantees 99.999999999% durability in a single region in case you worry about your data. You could even go to S3-IA for lower cost, but the storage footprint of one small web app is tiny anyway. This bucket is going to host two items for us:
      • Angular 6 code (you can keep this private with access only to CloudFront)
      • Beanstalk repo for WAR files 
  • CloudFront
    • This is our fronting service: it enables routing to /index.html, distributes content closest to your customers' locations, and terminates HTTPS for our Angular app. There are a few important items worth mentioning:
      • you need routing of 400 and 404 errors (at least) to /index.html with a 200 HTTP code, or your app will not work properly
      • you can provide your own certificate here if you have your own domain registered (you can use Amazon ACM to generate one)
  • CloudWatch
    • This service is enabled by default and will track and report usage for your AWS components. Max resolution is 1 minute. Depending on the configuration and the amount of logging, different charges may apply; modest logging should remain relatively cheap or free. Here is the pricing.
  • Elastic Beanstalk
    • This is your PaaS. You will use it as the entry point to create your configuration, environment, VPC, load balancing, auto-scaling groups, RDS, etc. This can be done in an easy and convenient way, and you can save hours of work by deploying this way. There are a few important items to consider:
      • I created my database separately from this configuration. It matters whether you want Tomcat (for example) or your application to manage the DB connection. There is also more flexibility in configuring RDS individually.
      • I am using Nginx as a reverse proxy and Tomcat 8 to serve the content (WAR). One important item: since this is a Spring Boot application, you need to pass in -D style properties to override the Spring ones, NOT environment ${} variables. It took me a good hour to figure out what 'environment' means in Beanstalk.
      • I did not turn on load balancing, as this costs money even in the free tier. You can alternatively load balance with Route53, but you will need to connect directly to the EC2 instances, and this limits the auto-scaling options.
      • If you want to increase the number of instances, I did not find a Beanstalk option to change auto-scaling other than enabling the LB or Time-based Scaling. You can, however, go into the Auto Scaling group directly, increase Max instances to your desired number and configure the activation trigger. This will not help much on its own, though, as you would need a load balancer in front of those instances. The only other option would be to put Elastic IP addresses on the EC2 instances and DNS-balance across those, but I honestly did not try this.
      • When deploying the WAR file, you need to create a custom nginx.conf for port 443, together with the uploaded certificates (I got mine from an SSL site for free). You will need an .ebextensions directory in the WAR file with all the configurations and certificates (a sketch follows after this list). The EC2 configuration is rebuilt on every restart, so you will lose port 443 if you do not have this in place. This is ONLY needed if you do not have an LB; otherwise, the LB will take care of port 443 for you (after you configure it).
      • You need to open port 443 in your EC2 security group (Elastic Beanstalk will create one if you do not already have one). It needs to be accessible from 0.0.0.0/0, since your Angular app will connect directly to the servers using this rule.
    • RDS
      • My choice for RDS is Postgres for prod (H2 for dev). On a side note, I am amazed every time by how fast H2 is and how compatible with the SQL standard in terms of functions :). Postgres came closest to the capability I needed for some custom queries (compared with e.g. MySQL). RDS was created in only one AZ with a minimal-sized instance, and the security group was opened from the EC2 group to this group, by name, for the desired port. RDS access is thus limited to EC2 only; if needed, direct access for DB management can be done through port forwarding.
    • Route 53
      • I registered my domain through Amazon, and AWS created a Hosted Zone for me. Inside it, I created ALIAS records pointing to Beanstalk for the www. and api. subdomains and the naked domain name. If you choose to point directly to EC2 instances, you can do that, and you can always load balance using DNS (though this is not the primary balancing method on AWS).
    • Auto Scaling Group
      • This is created by Beanstalk, and in general you have two options: single instance or load balanced. Again, the load-balanced option will cost you, so use it only if you need it.
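
Following up on the .ebextensions note above, here is a minimal sketch of such a config file (e.g. .ebextensions/https.config inside the WAR). The layout follows the pattern from the AWS docs, but the certificate paths, file names and the upstream port 8080 are assumptions that must match your own platform version and setup:

    files:
      /etc/nginx/conf.d/https.conf:
        mode: "000644"
        owner: root
        group: root
        content: |
          # Terminate HTTPS on the instance itself (no load balancer in front)
          server {
            listen       443;
            server_name  localhost;

            ssl                  on;
            ssl_certificate      /etc/pki/tls/certs/server.crt;
            ssl_certificate_key  /etc/pki/tls/certs/server.key;

            location / {
              proxy_pass          http://localhost:8080;
              proxy_set_header    Host            $host;
              proxy_set_header    X-Real-IP       $remote_addr;
              proxy_set_header    X-Forwarded-For $proxy_add_x_forwarded_for;
            }
          }
      # server.crt and server.key ship the same way, as additional files:
      # entries (or are pulled from S3), so they survive instance rebuilds
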
This is just one way of setting up your environment, and for a small application it works for me. I have all the backups, so I am not too worried about downtime, if any.

As suggested, there are a few items to change if you need a more robust environment. A couple of them would be:
  1. Enable LB on Elastic Beanstalk and have your system balance across at least 2 zones.
  2. Deploy in at least 2 zones; depending on your clientele, hosting in each major region would be beneficial (and then use geolocation/proximity routing to optimize content delivery).
  3. Right now, I have everything deployed in one VPC, but depending on your needs and layout, you may want more than one, and then decide which would be private and which public, where to use gateways, and how to connect the VPCs.
  4. An API Gateway is always an option for more complex environments, and HTTPS can also terminate there (with your certificate). This adds a layer of abstraction if you are using microservices from multiple points and with varied infrastructure.
  5. RDS should be deployed Multi-AZ with read replicas enabled. Backup is already enabled even for one instance. There is also the possibility of using a NoSQL database that is automatically deployed across multiple AZs (like DynamoDB).
There are many ways things can be configured on Amazon, so it is worthwhile to investigate and try them out. It is not expensive to test configurations to figure out which one is right for you, but it may be expensive to correct everything down the road once you realize it was not set up properly. Amazon offers the option to pay only for what you use, so why not try it?

So far, my costs were for registering my domain and a small Route53 charge ($0.50 per hosted zone).

If you have a better setup in mind, please let me know. I am always trying to learn new and better ways of optimizing data, infrastructure and cost.



Tuesday, March 12, 2019

Number formatting in Angular 6 (multirow form)

Working as a contractor made me build an application that I use to generate my invoices and track my activity. But why build an application when there are commercial solutions out there, you ask? Well, I needed a way to track my activity other than with Excel-like tools (which most of my colleagues use), I did not want to purchase other tools that may be available, as they did not have the format and reports that I needed (although some of them are really good), and most importantly, it is fun to build stuff. As a developer, I created my own solution that you can get from my GitHub. Part of the solution is on Bitbucket, as I needed to develop the Angular-based solution in a private repo. This is because I purchased an excellent Angular starter template called Egret that has some license limitations regarding sharing. It offered me a framework to work with, with features such as sample forms, basic security, routing, animations, etc. I modified all of these to suit my needs.

One of the issues I had was that I needed to format numbers in a multi-column/row form, for all the edit boxes of the timesheet entry. I had to spend a bit of time researching how this is best accomplished, but landed on a specific solution with a few modifications of my own.
I needed to:
  • Format on blur and focus events
  • Format on form init without blur and focus events
  • Format and calculate fields like totals on every field update, with these fields being read-only
Here is the number-format.directive.ts that will take care of the blur and focus events:
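
A minimal sketch of what such a directive can look like (two-decimal formatting is assumed, and the appNumberFormat selector name is illustrative):

    import { Directive, ElementRef, HostListener } from '@angular/core';

    @Directive({ selector: '[appNumberFormat]' })
    export class NumberFormatDirective {
      constructor(private el: ElementRef) {}

      // On focus, strip the formatting so the raw number can be edited
      @HostListener('focus')
      onFocus(): void {
        const input = this.el.nativeElement as HTMLInputElement;
        input.value = input.value.replace(/[^0-9.\-]/g, '');
      }

      // On blur, reformat the value to two decimal places
      @HostListener('blur')
      onBlur(): void {
        const input = this.el.nativeElement as HTMLInputElement;
        const parsed = parseFloat(input.value);
        input.value = isNaN(parsed) ? '' : parsed.toFixed(2);
      }
    }
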


The second part takes care of the form init and the totals:

Here is the final look and feel:

A minor item to add to this would be to only allow increments of 0.25, or to round the number up to quarters.

Sunday, September 30, 2018

Spring Boot (1.5) OAuth2 Server in Enterprise environment


The problem


If we want to have an array of microservices and support user interaction through delegated authorization, this implementation would be one of the options to consider, or at least review. We first have to understand the differences between OAuth2 and e.g. OIDC before we continue explaining how to achieve an OAuth2 implementation in Spring Boot in a way that is stateless and integrated, so that many microservices can rely on it. OAuth 2.0 is the industry-standard protocol for authorization, while OpenID Connect 1.0 is a simple identity layer on top of the OAuth 2.0 protocol. It allows Clients to verify the identity of the End-User based on the authentication performed by an Authorization Server, as well as to obtain basic profile information about the End-User in an interoperable and REST-like manner. This does not mean that OAuth2 cannot do authentication; it is just that OIDC is better suited for that work.

We are going to focus on the OAuth2 implementation in this article. There are several good references you can read before venturing to create your own implementation (one, two, three, four, five, six). My reasons for doing this were several: no OAuth2 service being available, the need for microservices in our architecture, the uncertain structure of the client-server architecture, and several others. Since we already worked with Spring Boot, implementing this solution was the next best thing we could do to move our project further towards our goals.

There are two ways to handle tokens in OAuth2: plain tokens and JWT tokens. For our purposes we chose the plain-token implementation. The difference is that a plain token needs to be verified by the OAuth2 service every time it is used, while a JWT can be stored at the resource and verified with the public keys provided. Either way we can have a working solution, but implementing plain tokens was the simpler and faster way to go.

Our requirements were to provide a stateless authentication/authorization solution for the web client; the server had to have a small footprint and be scalable; we had to be able to see the tokens/users we generated; we needed revoke-token capability; we had to provide an automated way to obtain a token for integration testing; microservices had to be able both to authenticate themselves and to accept user-authenticated tokens; and we needed the ability to connect to LDAP or a database, plus the ability to later support SSO from a third-party provider. There is always an opportunity to implement different solutions, but this was something that could potentially play well into the future banking architecture and plans.


Server configuration


To start off, we chose Spring Boot OAuth2 and created a Spring Boot application. There were several configurations we needed to implement to make our server perform authorization and authentication.

  • WebSecurityConfigurerAdapter (to define /login, /logout, Swagger and filters; we can also use @EnableOAuth2Client to additionally configure an SSO client)
  • GlobalAuthenticationConfigurerAdapter (to define the user details service, the BCrypt password encoder and an init method to distinguish between the LDAP and database sources). This adapter was needed as there are several filters that read users depending on the flow invoked.
    • auth.ldapAuthentication() was the starting point for LDAP
    • auth.userDetailsService(...) was the starting point for the user details service and password encoding
  • An LdapAuthoritiesPopulator bean for custom LDAP authorities (it used a repository to load authorities based on the user authentication)
  • AuthorizationServerConfigurerAdapter (to define the OAuth2 server infrastructure, including custom SQL queries, as we needed DB access for a stateless solution across servers). This included tables like oauth_access_token, oauth_refresh_token, oauth_code and oauth_client_details; the tables are used depending on the flow invoked. This involved overriding TokenStore, ClientDetailsService, AuthorizationCodeServices, configure(AuthorizationServerEndpointsConfigurer endpoints), configure(AuthorizationServerSecurityConfigurer security), DefaultTokenServices and configure(ClientDetailsServiceConfigurer clients) - with @EnableAuthorizationServer.
  • ResourceServerConfigurerAdapter (to define the adapter that will serve as the entry point and configuration for any custom APIs) - with @EnableResourceServer.
  • We also needed to expose an API for user verification where we publish the Principal object (this is used by the microservices to obtain user details)
It is very important to note that adapter ordering matters enormously; you may lose a lot of time investigating why something is not working just because of this. The order (lowest to highest) should be Web (needed for the authorization_code and implicit grants, stateful because of the login page and authentication) -> OAuth2 (needed for all grants, stateless for the password, refresh_token and client_credentials grants) -> Resource (needed for the APIs).
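
To make this concrete, here is a minimal sketch of the authorization server adapter against the legacy Spring Security OAuth2 API. The DataSource and AuthenticationManager wiring are assumptions of this sketch; the table names are the defaults mentioned above:

    import javax.sql.DataSource;

    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.security.authentication.AuthenticationManager;
    import org.springframework.security.oauth2.config.annotation.configurers.ClientDetailsServiceConfigurer;
    import org.springframework.security.oauth2.config.annotation.web.configuration.AuthorizationServerConfigurerAdapter;
    import org.springframework.security.oauth2.config.annotation.web.configuration.EnableAuthorizationServer;
    import org.springframework.security.oauth2.config.annotation.web.configurers.AuthorizationServerEndpointsConfigurer;
    import org.springframework.security.oauth2.provider.token.store.JdbcTokenStore;

    @Configuration
    @EnableAuthorizationServer
    public class AuthServerConfig extends AuthorizationServerConfigurerAdapter {

        @Autowired
        private DataSource dataSource;              // shared DB keeps the solution stateless across servers

        @Autowired
        private AuthenticationManager authenticationManager;

        @Override
        public void configure(ClientDetailsServiceConfigurer clients) throws Exception {
            clients.jdbc(dataSource);               // reads oauth_client_details
        }

        @Override
        public void configure(AuthorizationServerEndpointsConfigurer endpoints) throws Exception {
            endpoints
                .tokenStore(new JdbcTokenStore(dataSource))      // oauth_access_token / oauth_refresh_token
                .authenticationManager(authenticationManager);   // required for the password grant
        }
    }
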

We opted to use authorization_code (without a secret) plus a refresh token for user authentication, client_credentials for the servers and microservices that needed to authenticate themselves, and the password grant for the integration test cases (as this is the easiest way to obtain a token for a specific user). Our client_credentials setup added a default role for the client, and the other grants added a default role for the user; this way, every authenticated client/user has a default role to start with, and it is a sure way to distinguish between a human user and a server API. The one problem we still needed to solve is propagation of tokens through the layers of services: it is not good practice to propagate the same token between different horizontal layers, because the identification of the service doing the authorization is lost.
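
For the integration tests, a token for a specific user can be obtained with the password grant roughly like this (a sketch only; the endpoint URL, client and user values are placeholders):

    import org.springframework.security.oauth2.client.OAuth2RestTemplate;
    import org.springframework.security.oauth2.client.token.grant.password.ResourceOwnerPasswordResourceDetails;

    public class TokenFetcher {

        // Fetches a plain token the way an integration test might do it
        public static String fetchToken() {
            ResourceOwnerPasswordResourceDetails details = new ResourceOwnerPasswordResourceDetails();
            details.setAccessTokenUri("https://auth.example.com/oauth/token");
            details.setClientId("integration-tests");
            details.setClientSecret("secret");
            details.setUsername("test.user");
            details.setPassword("test.password");

            OAuth2RestTemplate template = new OAuth2RestTemplate(details);
            return template.getAccessToken().getValue();  // triggers the token request
        }
    }
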


Client configuration


Before we start with the microservices, it is important to say that all services are defined as resources (using the @EnableResourceServer annotation). This means that we can, for one, identify a resource and enable its usage by the client in the OAuth2 configuration, and second, we can set up the verification URL for the token. For any microservice to identify itself, we have two configuration options: declaratively in application.yml, or programmatically (in case we need to obtain the token ourselves). The first option is useful for any service that needs to implement verify_token, that is, whenever we receive a token our API sends a request to validate it and populate the user details in Spring's SecurityContext. This is achieved in the security.oauth2.client and security.oauth2.resource entries, where we have to specify our given client id, secret, token verification URL, resource id, user details URL and a few other parameters. The second option is to obtain the token programmatically, for example in the Spring Integration layer, where a declarative approach might be difficult, non-existent or dependent on extensive logic. In this case the approach is to create client code annotated with @EnableOAuth2Client; in our case this was done in the ClientHttpRequestFactory. Obtaining the token is achieved with OAuthClient and OAuthClientRequest against our authorization server, using the client_credentials grant.
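
As an illustration of the programmatic option, here is a minimal sketch using Spring's ClientCredentialsResourceDetails (a close equivalent of the OAuthClient/OAuthClientRequest calls mentioned above; the URL and client values are placeholders):

    import org.springframework.security.oauth2.client.OAuth2RestTemplate;
    import org.springframework.security.oauth2.client.token.grant.client.ClientCredentialsResourceDetails;

    public class ServiceTokenProvider {

        private final OAuth2RestTemplate template;

        public ServiceTokenProvider() {
            // The microservice authenticates itself with client_credentials
            ClientCredentialsResourceDetails details = new ClientCredentialsResourceDetails();
            details.setAccessTokenUri("https://auth.example.com/oauth/token");
            details.setClientId("billing-service");
            details.setClientSecret("secret");
            this.template = new OAuth2RestTemplate(details);
        }

        public String bearerToken() {
            // The template caches the token and refreshes it when it expires
            return template.getAccessToken().getValue();
        }
    }
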


In the end, the goal we had was achieved, and all flows are now functional and serving their purpose. The next thing to worry about is switching the framework to OIDC coupled with OAuth2. That will, perhaps, be a good topic for one of the next blogs.

Sunday, April 15, 2018

MyBatis Paging, Sorting and Filtering

When considering which database persistence framework to use to meet the development goals, we must take several factors into consideration:
  • Knowledge of your team and ability of the team leads to help out with problems
  • Available documentation
  • Maturity of the framework
  • Availability of the helper classes or supporting frameworks (e.g. how much custom functionality we need to create)
  • The underlying structure of the database and its complexity (if one exists)
  • Portability (if required)
  • "Top down" vs "bottom up" approach driven by either business requirement or existing technology
  • Whether we want or have to write SQL statements and how complex they need to be to fulfill the business goals
  • Do we "own" the data model or is this vendor maintained data model
As you can see, reaching a decision on what to use can be based on experience or on trial and error.

In one of my recent projects, we reached the decision to use MyBatis as the framework that would enable us to fulfill most of the goals set upon us by the business and the existing applications and database topology. In an enterprise environment, you may be faced with decisions that span not only your immediate application but several others and their data models. We needed to do just that: read data from various data sources, integrate with several different web services (both REST and SOAP) and provide a unified DB and API interface (facade). This interface needed to bring web services and various databases together to work as one, with improved performance. Introducing an ESB layer was needed, but we also needed to create a uniform data model (that we could model our facade objects on). MyBatis was the perfect tool for this. The only problem we faced was the lack of dynamic paging, sorting and filtering (PSF) functionality, given that we needed to combine results from different databases (some of which had awkward designs, to say the least). Hibernate was dead in the water here. We ended up using PL/SQL and SQL from various sources, with pipelines and extensive logic to bring order to the data models. This solution worked very well, and in the end we only needed to implement and expose the Search + PSF to the API and clients. We chose a supporting framework to help us build dynamic additions to the query: squiggle-sql. In their own words:
Squiggle is a little Java library for dynamically generating SQL SELECT statements. Its sweet spot is for applications that need to build up complicated queries with criteria that change at runtime. Ordinarily it can be quite painful to figure out how to build this string. Squiggle takes much of this pain away.
This worked perfectly, as we could expose a REST API in JSON with PSF parameters and reuse it through various interfaces. To avoid SQL injection (as we were building these queries dynamically), we needed to use enumerations to match exact constructs, avoid certain risky operations like OR 1=1 or comments in filters, and limit the field types and lengths. Overall, we achieved a good mix of security and usability with a flexible interface.

We started by defining the interface in JSON that must have Paging (or streaming), Sorting and Filtering functionality; a hypothetical request is sketched below. Again, it is important to limit any functionality with constants or enums, to make sure all constructs can be matched exactly to operations or underlying supporting Beans. This is an important security feature.
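
For example, a request shaped like this (all field names are illustrative; the sort and filter fields and operations map to enum constants on the server):

    {
      "page": 0,
      "pageSize": 25,
      "sort": [ { "field": "AMOUNT", "direction": "DESC" } ],
      "filter": [ { "field": "STATUS", "operation": "EQ", "value": "PAID" } ]
    }
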

Executing paging in the database is relatively straightforward in Oracle: add OFFSET <x> ROWS FETCH FIRST <y> ROWS ONLY. The other thing needed is the total number of rows matching the query, so the GUI can calculate how many pages are present. This generally requires executing the statement a second time without the row-limiting clause, or we can write a WITH statement in Oracle, execute once, and link it into a broader query that counts on the first occurrence and limits rows on the second. An important constraint is that pages start at 0 and go up, and you should not allow huge pages to be returned. If you require a single result that has all records, this may be achieved by a secondary API that is limited in usage and in the users that can access it. The reason is that multiple parallel requests executed against a huge underlying data set may lead to a DoS-type attack.
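
As a minimal sketch of the single-execution variant (MyBatis annotation style; the invoice table and columns are made up for illustration), an analytic COUNT(*) OVER () returns the total match count with every page, because OFFSET/FETCH is applied after the window function is evaluated:

    import java.util.List;
    import java.util.Map;

    import org.apache.ibatis.annotations.Mapper;
    import org.apache.ibatis.annotations.Param;
    import org.apache.ibatis.annotations.Select;

    @Mapper
    public interface InvoiceMapper {

        // Every row of the page carries total_rows, so the statement
        // only executes once for both the page and the overall count
        @Select("SELECT id, amount, COUNT(*) OVER () AS total_rows FROM invoice "
              + "ORDER BY id "
              + "OFFSET #{offset} ROWS FETCH FIRST #{pageSize} ROWS ONLY")
        List<Map<String, Object>> findPage(@Param("offset") long offset,
                                           @Param("pageSize") int pageSize);
    }
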

Sorting can involve multiple columns, which means we must match each sort column with an ascending or descending parameter and a participating Bean property. Sorting is added as an ORDER BY clause. An important feature is to match the ORDER BY clause by enum and the columns by the underlying Bean. The Bean exposed to the API should only carry fields that are necessary for the API to function properly and satisfy a business need; any other database functionality regarding the tables should be hidden behind services and transfer objects (ref). The order clause can be applied to complex queries, e.g. multiple sets joined together, by creating an encapsulating select statement around the original request.

Filtering may be the trickiest one to implement due to the wide range of criteria that can be applied. As before, using enums to define the operations and limiting the filter to single field names and the AND operation only is important for both security and speed. Allowing OR or free statement entry may be a dangerous option to execute.
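
A sketch of this enum-whitelisting idea (all names invented for illustration; filter values are bound as JDBC parameters, never concatenated into the SQL text):

    import java.util.List;

    public final class PsfSql {

        // Whitelist of columns that may ever appear in ORDER BY or WHERE
        public enum Field {
            ID("id"), AMOUNT("amount"), STATUS("status");
            final String column;
            Field(String column) { this.column = column; }
        }

        // Whitelist of operations; no OR, no free-form input
        public enum Op {
            EQ("="), GT(">"), LT("<"), LIKE("LIKE");
            final String sql;
            Op(String sql) { this.sql = sql; }
        }

        public static String orderBy(List<Field> fields, List<Boolean> ascending) {
            if (fields.isEmpty()) return "";
            StringBuilder sb = new StringBuilder(" ORDER BY ");
            for (int i = 0; i < fields.size(); i++) {
                if (i > 0) sb.append(", ");
                sb.append(fields.get(i).column).append(ascending.get(i) ? " ASC" : " DESC");
            }
            return sb.toString();
        }

        // AND-only filter clause; one '?' placeholder per criterion
        public static String where(List<Field> fields, List<Op> ops) {
            if (fields.isEmpty()) return "";
            StringBuilder sb = new StringBuilder(" WHERE ");
            for (int i = 0; i < fields.size(); i++) {
                if (i > 0) sb.append(" AND ");
                sb.append(fields.get(i).column).append(' ').append(ops.get(i).sql).append(" ?");
            }
            return sb.toString();
        }
    }
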

Overall, what we achieved is that MyBatis now has the potential to do some of the things that Hibernate offers out of the box.



Saturday, May 13, 2017

Spring Boot with JSF

I recently open sourced an application that I created for my own time tracking and invoicing, and published it on GitHub under the Apache 2.0 license. When I started working on it, I thought about which technology I should use and decided on Spring, more precisely Spring Boot, because that is natural to me as I have used it for many years. But this would only serve the middle tier, and I still needed to decide what to use for the front end. The trend with Spring Boot is to use AngularJS or similar frameworks that tie into its capabilities and can be deployed on platforms like Cloud Foundry or similar cloud-based solutions. This is because these are stateless solutions, and their scalability and ease of deployment make them a good fit for Spring Boot. However, once we start distinguishing between web applications and enterprise applications, depending on what they need to achieve, I believe that JSF still plays a big role due to its robustness and the number of out-of-the-box components we can use. The PrimeFaces framework is something I have used extensively in past years, and it played well with older Spring implementations, but Spring Boot was not meant to work with it out of the box. I found a project on GitHub called JoinFaces to help me with this. JoinFaces incorporates best practices and frameworks from the JSF world and allows them to work together with Spring Boot. In an environment where we need scalability and multiple server deployments, we would still need a solution to share sessions or create sticky client connections, but for my purpose this was ideal. So here is the stack that I used:
  • JoinFaces
  • Spring Boot
  • Hibernate (with QueryDSL)
  • H2 database
  • BIRT reporting engine
These are the basic libraries that allowed me to create an easily deployable solution for a local computer or a cloud-based application, depending on your need. The application still has only one deployable file, but we could easily abstract out a middle tier with business services if we needed a microservices-type app. So why use these components? Let's address my thinking behind each one of them:

JoinFaces


This is the project that ties JSF to Spring Boot, and it can be found at this location. In their own words: This project enables JSF usage inside JAR packaged Spring Boot Application. It autoconfigures PrimeFaces, PrimeFaces Extensions, BootsFaces, ButterFaces, RichFaces, OmniFaces, AngularFaces, Mojarra and MyFaces libraries to run at embedded Tomcat, Jetty or Undertow servlet containers. It also aims to solve JSF and Spring Boot integration features. Current version includes JSF and CDI annotations support and Spring Security JSF Facelet Tag support.

Since it includes PrimeFaces and this is my favourite JSF engine, it was perfect for what I was trying to do. It is also up to date and well maintained.

Spring Boot


Spring Boot is the new flavor and trend in development if you prefer this framework. It is what Maven was to Ant when it came out. The goal is to pre-configure as many parameters as possible and autodetect dependencies. The whole Cloud Foundry platform was created to make development and deployment as easy as possible in the enterprise environment. In their own words: Spring Boot makes it easy to create stand-alone, production-grade Spring based Applications that you can "just run". We take an opinionated view of the Spring platform and third-party libraries so you can get started with minimum fuss. Most Spring Boot applications need very little Spring configuration.

Spring in general was a revelation when it came out. It made developing in Java something that you can actually like. Using this as a wiring framework for the libraries is a really good fit in my opinion.

Hibernate (with QueryDSL)


Hibernate was always one of my favourite ORM frameworks. In a world where good programmers are difficult to find and where many Java developers do not know how to write proper SQL, Hibernate bridges this gap. Another good thing is that it will adapt to any supported database you use. The bad thing is that you may not be able to use database-specific features (e.g. hierarchical queries), and it can struggle where you have complex and demanding queries. Hibernate may also not be a good option for optimising complex queries. If your existing database model is not 'by the book', you may experience additional problems. In their own words: Hibernate ORM enables developers to more easily write applications whose data outlives the application process. As an Object/Relational Mapping (ORM) framework, Hibernate is concerned with data persistence as it applies to relational databases (via JDBC).

You may fall back to JDBC from Hibernate in case you need to write database-specific queries or native-style queries.

QueryDSL is a good addition to JPA, as it enables easy filtering and query structuring where this may be required. I found it to be very helpful. In their own words: Querydsl is a framework which enables the construction of type-safe SQL-like queries for multiple backends including JPA, MongoDB and SQL in Java. Instead of writing queries as inline strings or externalizing them into XML files they are constructed via a fluent API.
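
A small sketch of what this looks like in practice (QInvoice is the Q-type that QueryDSL would generate from a hypothetical Invoice entity):

    import java.math.BigDecimal;
    import java.util.List;

    import javax.persistence.EntityManager;

    import com.querydsl.jpa.impl.JPAQueryFactory;

    public class InvoiceQueries {

        private final JPAQueryFactory query;

        public InvoiceQueries(EntityManager em) {
            this.query = new JPAQueryFactory(em);
        }

        // The fluent API keeps the whole filter type-safe and refactorable
        public List<Invoice> unpaidAbove(BigDecimal amount) {
            QInvoice invoice = QInvoice.invoice;
            return query.selectFrom(invoice)
                        .where(invoice.paid.isFalse()
                               .and(invoice.total.gt(amount)))
                        .orderBy(invoice.created.desc())
                        .fetch();
        }
    }
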


H2 database


I went through several databases (Derby, H2 and HyperSQL) for this project, and H2 was for me the easiest to set up, back up and run in shared mode, while supporting a lot of the SQL standard. Very fast to initialise and super easy to understand. It integrated well into the application, and so far I have not seen any issues with it. In their own words: Welcome to H2, the Java SQL database. The main features of H2 are: Very fast, open source, JDBC API, Embedded and server modes; in-memory databases, Browser based Console application, Small footprint: around 1.5 MB jar file size.


BIRT reporting engine


For invoices, I required an open source reporting engine that was free. I looked into two: JasperReports and BIRT. I started with Jasper as it seemed easier to integrate, but once I started development for the subqueries, things got 'interesting'. They both have a good UI for designing reports, but I found BIRT to be easier and faster to work with. The main difference is that Jasper enables absolute positioning of elements and BIRT does not; with some planning, this really is not relevant. In their own words: BIRT is an open source technology platform used to create data visualizations and reports that can be embedded into rich client and web applications.


In the next few blogs, I will try to explain what I did, how and why. You may find the application that I called My Open Invoice on GitHub.

Thanks


Wednesday, December 16, 2015

Books 2

Here is a new batch of books that I have read and liked.

[Homeland] Cory Doctorow wrote a sequel to Little Brother. His protagonist, Marcus, continues the fight against the police state. If you enjoyed the first book, this is certainly going to entertain you. You can get the book here.

[Lost Symbol] The Lost Symbol is a masterstroke of storytelling that finds famed symbologist Robert Langdon in a deadly race through a real-world labyrinth of codes, secrets, and unseen truths . . . all under the watchful eye of Brown’s most terrifying villain to date. Set within the hidden chambers, tunnels, and temples of Washington, D.C., The Lost Symbol is an intelligent, lightning-paced story with surprises at every turn. You can get the book here.

[Inferno] Harvard professor of symbology Robert Langdon awakens in a hospital in the middle of the night. Disoriented and suffering from a head wound, he recalls nothing of the last thirty-six hours, including how he got there . . . or the origin of the macabre object that his doctors discover hidden in his belongings. Langdon's world soon erupts into chaos, and he finds himself on the run in Florence with a stoic young woman, Sienna Brooks, whose clever maneuvering saves his life. Langdon quickly realizes that he is in possession of a series of disturbing codes created by a brilliant scientist-a genius whose obsession with the end of the world is matched only by his passion for one of the most influential masterpieces ever written-Dante Alighieri's dark epic poem The Inferno. Racing through such timeless locations as the Palazzo Vecchio, the Boboli Gardens, and the Duomo, Langdon and Brooks discover a network of hidden passageways and ancient secrets, as well as a terrifying new scientific paradigm that will be used either to vastly improve the quality of life on earth . . . or to devastate it. You can get the book here.

[A.I. Apocalypse] Leon Tsarev is a high school student set on getting into a great college program, until his uncle, a member of the Russian mob, coerces him into developing a new computer virus for the mob’s botnet - the slave army of computers they used to commit digital crimes.

The evolutionary virus Leon creates, based on biological principles, is successful -- too successful. All the world’s computers are infected. Everything from cars to payment systems and, of course, computers and smart phones stop functioning, and with them go essential functions including emergency services, transportation, and the food supply. Billions may die.

But evolution never stops. The virus continues to evolve, developing intelligence, communication, and finally an entire civilization. Some may be friendly to humans, but others are not.

Leon and his companions must race against time and the military to find a way to either befriend or eliminate the virus race and restore the world’s computer infrastructure. You can get the book here.

[Influx] Are smartphones really humanity’s most significant innovation since the moon landings? Or can something else explain why the bold visions of the 20th century—fusion power, genetic enhancements, artificial intelligence, cures for common diseases, extended human life, and a host of other world-changing advances—have remained beyond our grasp? Why has the high-tech future that seemed imminent in the 1960s failed to arrive?

Perhaps it did arrive…but only for a select few. You can get the book here.
[Flash Boys] In Michael Lewis's game-changing bestseller, a small group of Wall Street iconoclasts realize that the U.S. stock market has been rigged for the benefit of insiders. They band together—some of them walking away from seven-figure salaries—to investigate, expose, and reform the insidious new ways that Wall Street generates profits. If you have any contact with the market, even a retirement account, this story is happening to you. You can get the book here.

[The Money Bubble] The US, Europe and Japan are making financial mistakes that will soon cause a crisis of historic proportions. This book explains those mistakes and the likely shape of the crisis, and offers advice to those hoping to protect themselves and profit from what's coming. You can get the book here.

[Beginning Python] is not a bad book to remind yourself of the basics. It is written for Python 2.4.

  • This tutorial offers readers a thorough introduction to programming in Python 2.4, the portable, interpreted, object-oriented programming language that combines power with clear syntax
  • Beginning programmers will quickly learn to develop robust, reliable, and reusable Python applications for Web development, scientific applications, and system tasks for users or administrators
  • Discusses the basics of installing Python as well as the new features of Python release 2.4, which make it easier for users to create scientific and Web applications
  • Features examples of various operating systems throughout the book, including Linux, Mac OS X/BSD, and Windows XP

This is it for now; if you liked my selection, please drop me a note or recommend another book.

Thursday, April 23, 2015

Aspose document generation

Hello,

In today's business world, reporting comes as the end result of a functional application, and it serves the purpose of giving introspection into the system, its functionality, current and future needs and much more. Having a good system to produce reports is a challenging requirement, as one has to balance functionality, requirements, price, potential support needs, ease of use, widespread acceptance in the development community, documentation, performance, interoperability, etc. There is a huge range of products on the Internet when searching for a suitable framework to satisfy such needs.

During one of my last projects, we needed to accommodate several requirements and find a product that can do it all :). I believe that we came across a product that, in our testing and POC, proved to be the right choice. That product is Aspose.

Aspose is a leading vendor of .NET, Java, Cloud and Android APIs, SharePoint components and rendering extensions for Microsoft SQL Server Reporting Services and JasperReports. They provide various products for working with Word, Excel, Images etc. To make the process easier, Aspose offers a license for all of the products called Total license. The pricing is very competitive too.

The benefit of using Aspose is that the end reports or document templates can be designed by business users in the form of Excel or Word documents, and Aspose can be used to programmatically populate them. The documentation is very good, with a lot of examples. Support is done primarily through the forums; the response times for support requests at the time of writing were very good, even for unpaid support.

The usefulness of Aspose comes from the versatility of the product: we can, for example, start working with Word and add Excel details or pull parts of them in, convert and standardize both and produce a combined PDF, and then insert images or graphs on the go. The product comes in the form of a library and supports both the Java and .NET worlds. Library use is straightforward, and the license is loaded through code before the function calls are invoked. Documents can be processed from files or from streams (if you keep them in the database). The library supports multithreading and in our tests proved to be very consistent and without apparent memory leaks. Data population uses the Mail Merge functionality from Word and supports both plain fields and repeating fields (like tables). Nested tables require some playing around with the design of the Word documents, but it is doable and the result looks very good. The product can run in both the Windows and *NIX worlds, and a Word or Excel installation is not needed.

I also have to add that the libraries can be converted into OSGi-friendly libraries, and they work quite well in an OSGi container too, whether you package them as a JAR dependency or expose them as an OSGi bundle.

When working with Aspose and Mail Merge, the data stream is established through a java.sql.Statement and has to be passed in. We have tested the functionality on both Oracle and DB2, and both work really well. Minor overriding needs to be done in the case of BLOB fields, to accommodate output based on the client library that you want to use.

A sample generic application (you can find similar code on the Aspose site in various sections):
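
The outline below is a minimal sketch of that idea: the license file name, template name, JDBC URL and the "Lines" region name are placeholders to adapt to your own setup.

    import com.aspose.words.Document;
    import com.aspose.words.License;
    import com.aspose.words.net.System.Data.DataTable;

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class InvoiceGenerator {

        public static void main(String[] args) throws Exception {
            // Load the license through code before any API calls, as noted above
            new License().setLicense("Aspose.Total.Java.lic");

            Document doc = new Document("invoice-template.docx");

            try (Connection con = DriverManager.getConnection("jdbc:h2:mem:demo");
                 Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery("SELECT item, qty, price FROM invoice_line")) {
                // "Lines" must match the repeating Mail Merge region in the template
                doc.getMailMerge().executeWithRegions(new DataTable(rs, "Lines"));
            }

            doc.save("invoice.pdf"); // output format inferred from the extension
        }
    }
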