AWS vs Oracle Cloud – My Experience

Oracle Cloud

Preamble

Having had a good impression from my use of Amazon Web Services (AWS), I decided to take a look at Oracle Cloud.  AWS is of course the market leader, but Oracle, Microsoft et al. have joined the fray with competing services.  What follows is nothing like a Gartner report; rather, it is my personal experience of the Oracle and AWS services as an end user.  AWS is already well known and is perhaps the benchmark by which all others are presently measured, and this article maintains that perspective.  The narrative is of my experience of Oracle Cloud and how it compares with AWS.  To keep this brief, I will only mention the cons.

Setting Up

As with every online service, Cloud or otherwise, you need to sign up and configure an account with the provider.  The Oracle Cloud account setup involved more steps than the AWS one.  The telling difference, though, was in the time it took for the account to become available for use.  Whereas the AWS account was ready to go within a day, it took about 3 working days for my Oracle account.

The second difference was in communication.  AWS sent me one email with all I needed to get started; Oracle sent me about 5 emails, each with useful information.  I had some difficulty logging in to the Oracle service at first, but only because I wrongly assumed that the first 3 emails I had received contained all I needed to log in.  The 4th email was the one I needed, and with it in hand, login was easy and I could start using the services – the 5th email was for the VPN details.

Oracle Cloud Services

Having set up the account, I was now able to log in and access the services.  I describe my experience under four headings: user interface, service granularity, provisioning and pricing.

User Interface

I will describe the interface usability in three aspects: consistency, latency and reliability.  First, consistency.  On logging in to the Oracle Cloud, there is an icon in the top left-hand corner that brings up the dashboard.  This is similar to AWS.  However, clicking that same button on the default, database, storage and SOA home pages results in a different list of items in the dashboard display.  This can be confusing, as users may think they have done something wrong, or lost access to one of their cloud services.  The second, latency, is also a big problem with some in-page menus.  Response time for drop-down lists can be painfully slow, and there is no visual indicator (hourglass) of background processing.  In extreme cases, latency becomes an issue of reliability: there were times when in-page menus simply failed to display despite several clicks and page refreshes.

Service Granularity

My area of concern here is IaaS, and the two services I had issues with were compute and storage.  The choice of OS, RAM, CPU, etc. available when selecting a compute image is quite limited: mostly Oracle Linux, with a chained increase in RAM and CPU specifications.  When creating storage, it appears that there is only one “online” storage class available – the standard container.  The use of the terms “container” and “application container” was a bit confusing in this context, especially when trying to create storage for a database that will be used by Oracle SOA Cloud.

Provisioning

Provisioning is the fulfilment of the request to create a service package (IaaS, PaaS or SaaS).  The time it took to create a database (PaaS) instance was in excess of 30 minutes.  Perhaps this was due to the time of day and high concurrency.  Nevertheless, given that I could run a script to do the same on my laptop within 10 minutes, one would expect equal or better from the cloud.  The delay was even longer with the creation of the Oracle SOA Cloud instance; this took well over 2 hours to complete.  Scripted creation of instances would be much quicker on a modest PC.  For cloud services, images should provide even quicker initialisation of instances from templates.
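For reference, such a laptop script amounts to little more than a silent-mode dbca run.  The sketch below is purely illustrative and not my actual script; the template, SID, passwords and datafile destination are hypothetical:

  #!/bin/bash
  # Illustrative sketch only: create a database in silent mode with dbca.
  # Names, passwords and paths here are placeholders, not real values.
  dbca -silent -createDatabase \
    -templateName General_Purpose.dbc \
    -gdbName testdb \
    -sid testdb \
    -sysPassword "$SYS_PWD" \
    -systemPassword "$SYSTEM_PWD" \
    -datafileDestination /u01/app/oracle/oradata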

Pricing

This could be the elephant in the room.  Whereas options were few and insufficient in IaaS, the array of pricing options for almost identical PaaS offerings was long and rather confusing.  Unlike AWS, there are only two billing schedules: hourly and monthly.  There are no options to reserve or bid for capacity.  Finally, even the lowest prices are quite high from the perspective of an SME.  Billing needs finer granularity, or the composition of IaaS and PaaS offerings should allow greater flexibility.  Either way, entry prices need to be attractive to a larger segment of the market.

Summary

A great impediment to comparison is the short trial period allowed for Oracle Cloud services.  The 30-day allowance is insufficient, except for those with a prepared plan and a dedicated resource.  Such an exercise would in itself amount to no more than benchmarking, leaving little room for gaining a real feel for the services.

We should set aside the latency issues in setup, user interface and provisioning.  These are easy problems to resolve, and it is likely that Oracle will fix them very soon.  The output of provisioning for Oracle-specific PaaS and SaaS services was very good and compares favourably with AWS.  One advantage of the Oracle PaaS is the simple configuration of the requisite security to gain access for the first time.  This meant that the PaaS services were available for use very quickly, without further tweaking of settings.  The shortcoming, as previously stated, is that provisioning can take quite a while.  Overall, the use of the Oracle PaaS was seamless, and integration with on-premise resources was easy.  The only exception was JDeveloper, which could not integrate directly with Oracle SOA Cloud instances.

Competition

AWS has the benefit of early entry and a far richer set of services.  But the feature set is not the issue, since Oracle is not into the Cloud as an end in itself, but as a means to extend the availability of its existing products and services.  However, even in the limited subset where there is overlap, AWS provides finer granularity of services/features, better interface usability, and a much more alluring pricing model.

Oracle has fought many corporate and technology battles over the years.  The move into the Cloud space is yet another frontier, and changes will be needed in the following areas:

  • Open up the options and granularity of IaaS offerings
  • Address significant latencies in provisioning PaaS services
  • Revise the pricing model to accommodate SMEs
  • Totally refresh the flow and performance of web pages

The Cloud has arrived.  Like the Internet, it was initially underestimated, but it likewise promises to revolutionise IT.  This market will certainly benefit from competition, and I surely hope that Oracle will take up the gauntlet and offer us a compelling alternative to AWS – horizontal and vertical.

God bless!

Oyewole, Olanrewaju J (Mr.)
Internet Technologies Ltd.
lanre@net-technologies.com
www.net-technologies.com
Mobile: +44 793 920 3120

Impressive Amazon Web Services: First Glance


Amazon Web Services: Background

Amazon Web Services (AWS) have revolutionised the way we view IT provisioning.  AWS makes so many things easier, and often cheaper too.  The benefits scale from the SME right up to corporates; no segment is left out.  Complexity is abstracted away, and with a little effort, large and/or complex systems can be built with a few clicks and some configuration.

Architecture

We decided to take a quick look and see just how much AWS could offer low-budget SMEs, using our company’s existing platform as the subject.  We have one Oracle database and a handful of MySQL databases; an application server; and a web server fronting for the application server and several CMS-driven sites.  The application server runs Java web services that use data from the Oracle database.  The web server hosts the pages for the Java application.  It also serves a number of WordPress/PHP sites that run on data from the MySQL databases.  The logical view is illustrated in the diagram below:
[Figure: AWS nettech logical view]

We could map the logical view one-to-one to service units in AWS, or rationalise the target resources used.  AWS provides services for web and application computation (EC2), configuration automation (OpsWorks), data (RDS) and static web and media (S3), along with other useful features: Elastic IP, Lambda, IAM.  So, we have the option of mapping each logical component to an individual AWS service.  This would give us the most flexible deployment and unrivalled NFR guarantees of security, availability and recoverability.  However, there would be a cost impact, increased complexity, and there could be issues with performance.

Solutions

Going back to our business case and project drivers, cost reduction was the highlight.  After some consideration, two deployment options were produced (below), and we chose the consolidated view.  The web, application and data components were targeted at the EC2 instance, as they all require computation facilities.  All the media files were targeted at the S3 bucket.  The database data files could also have been located on the S3 bucket, but for the issue of latency and the costs that would accumulate from repeated access.

[Figure: AWS nettech physical view]

The media files were targeted at the S3 bucket due to their number and size (several GB).  The separation ensures that the choice of EC2 instance is not unduly influenced by storage requirements.  The consolidated view allows us to taste and see, starting small and simple.  Over time we will monitor, review and, if need be, scale up or scale out to address any observed weaknesses.
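For a flavour of what the consolidated option amounts to in practice, it can be stood up with a handful of AWS CLI calls along these lines.  The AMI, instance type, key name, security group and bucket name below are placeholders, not our actual choices:

  # Create the S3 bucket that will hold the media files
  aws s3 mb s3://nettech-media-example --region eu-west-1

  # Launch one EC2 instance to host the web, application and data tiers
  aws ec2 run-instances \
    --image-id ami-xxxxxxxx \
    --instance-type t2.medium \
    --key-name nettech-key \
    --security-group-ids sg-xxxxxxxx

  # Give the instance a stable public address with an Elastic IP
  aws ec2 allocate-address                  # note the AllocationId returned
  aws ec2 associate-address \
    --instance-id i-xxxxxxxx \
    --allocation-id eipalloc-xxxxxxxx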

Migration

Having decided on the target option, the next thing was to plan the migration from the existing production system.  An outline of the plan follows:

  1. Copy resources (web, application, data, media) from production to a local machine – AWS staging machine
  2. Create the target AWS components – EC2 and S3, as well as an Elastic IP and the appropriate IAM users and security policies
  3. Transfer the media files to the S3 bucket (see the sketch after this list)
  4. Initialise the EC2 instance and update it with necessary software
  5. Transfer the web, application and data resources to the EC2 instance
  6. Switch DNS records to point at the new platform
  7. Evaluate the service in comparison to the old platform
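Step 3, the bulk transfer of media, is a single CLI call from the staging machine; something like the following, with placeholder bucket and path names:

  # Recursively upload the media tree from the staging machine to S3
  aws s3 sync /var/www/media s3://nettech-media-example/media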

[Figure: AWS nettech physical view III]

Implementation

The time arrived to actualise the migration plan.  A scripted approach was chosen, as this allowed us to verify and validate each step in detail before actual execution.  Automation also provided a fast route back to the status quo ante, should things go wrong.  Once again we had the luxury of a number of options:

  • Linux tools
  • Ansible
  • AWS script (Chef, bash, etc.)

Given the knowledge we had in-house and the versatility of the staging machine’s operating system (OS), Linux was chosen.  Using a combination of the AWS command line interface (CLI) tools for Linux, shell scripts, and the built-in ssh and scp tools, the detailed migration plan was to be automated.  Further elaboration of the migration plan into an executable schedule produced the following outline:

  1. Update S3 Bucket
     • Copy all web resources (/var/www) from the staging machine to the S3 bucket
  2. Configure EC2 Instance
     • Install necessary services: apt update, Apache, Tomcat, PHP, MySQL
     • Add the JK module to Apache, having replicated the required JK configuration files from the staging machine
     • Enable SSL for Apache … having replicated the required SSL certificate files
     • Fix an incorrect default value in Apache’s ssl.conf
     • Configure a group (www) for ownership of the web server files
     • Configure file permissions in /var/www
     • Replicate the MySQL dump file from the staging machine
     • Recreate MySQL databases, users, tables, etc.
     • Restart services: MySQL, Tomcat, Apache
     • Test PHP, then remove the test script …/phpinfo.php
     • Install the S3 mount tool
     • Configure the S3 mount point
     • Change owner and permissions on the S3-mounted directories and files, for Apache access
     • Replicate the application WAR file from the staging machine
     • Restart services: MySQL, Tomcat, Apache
  3. Finalise Cutover
     • Update DNS records at the DNS registrar and flush caches
     • Visit the web and application server pages

Anonymised scripts here: base, extra
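As a flavour of those scripts, the EC2 configuration phase reduces to something like the sketch below.  Package names follow Ubuntu conventions and may differ by release; the hostnames, file names and group are placeholders, not the actual values:

  #!/bin/bash
  # Illustrative sketch of the EC2 configuration steps (run on the instance).
  sudo apt update
  sudo apt install -y apache2 tomcat8 php libapache2-mod-php mysql-server

  # Replicate the MySQL dump from the staging machine, then restore it
  scp staging-host:/backups/all-databases.sql /tmp/
  mysql -u root -p < /tmp/all-databases.sql

  # Group ownership and permissions for the web tree
  sudo chgrp -R www /var/www
  sudo chmod -R g+rw /var/www

  # Restart services, then smoke-test PHP and remove the test script
  sudo systemctl restart mysql tomcat8 apache2
  echo '<?php phpinfo();' | sudo tee /var/www/html/phpinfo.php
  # ...browse to /phpinfo.php, verify, then:
  sudo rm /var/www/html/phpinfo.php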

A few observations are worth noting regarding the use of S3.  AWS needs to make money on storage.  It should therefore not be surprising that updates to permissions/ownership, in addition to the expected read/write/update/delete operations, count towards usage.  Access to the S3 mount point from the EC2 instance can be quite slow, but there is a workaround: use aggressive caching in the web and application servers.  Caching also helps to reduce the ongoing costs of repeated reads from S3, since the cached files are hosted on the EC2 instance.  Depending on the time of day, uploads to S3 can be fast or very slow.
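For illustration, assuming the FUSE-based s3fs as the mount tool (the bucket name, mount point and credentials below are placeholders), the mount and the aggressive caching workaround look roughly like this:

  # Credentials for s3fs go in a file with restrictive permissions
  echo 'ACCESS_KEY_ID:SECRET_ACCESS_KEY' > ~/.passwd-s3fs
  chmod 600 ~/.passwd-s3fs

  # Mount the bucket at the point Apache serves media from
  sudo mkdir -p /mnt/s3media
  s3fs nettech-media-example /mnt/s3media -o passwd_file=$HOME/.passwd-s3fs

  # Aggressive caching: let Apache keep local disk copies of S3-backed files
  sudo a2enmod cache cache_disk
  sudo systemctl restart apache2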

Change Management

The cutover to the new AWS platform was smooth.  The web and application server resources were immediately accessible, with very good performance from the application server.  Performance for the websites with resources on S3 was average.  Planning and preparation took about two days.  The implementation of the scripts for migration automation took less than 30 minutes to complete.  This excludes the time taken to upload files to the S3 bucket and update their ownership and permissions.  Also excluded is the time taken to manually update the DNS records and flush local caches.

Overall, this has been a very successful project and it lends great confidence to further adoption of more solutions from the AWS platform.

The key project driver, cost saving, was realised, with savings of about 50% in comparison with the existing dedicated hosting platform.  Total cost of ownership (TCO) improves further as time progresses.  The greatest savings are in the S3 layer, and this might also improve with migration to RDS and Lightsail.

In the next instalment, we will be looking to extend our use of AWS from the present IaaS to PaaS; in particular, a comparison of the provisioning and usability of AWS and Oracle Cloud for database PaaS.  Have a look here in the next couple of weeks for my update on that investigation.

 


Oyewole, Olanrewaju J (Mr.)
Internet Technologies Ltd.
lanre@net-technologies.com
www.net-technologies.com
Mobile: +44 793 920 3120

Oracle BPEL Subprocess

There are quite a few new features in Oracle BPEL 12c, and here is one that I really like: the BPEL subprocess – a task/algorithm that is an association (not a composition) of an activity/process.

The concept has been around for decades and has featured in process diagrams and BPMN notation for quite a while, but there was no equivalent in program code.  Oracle has finally addressed this lacuna by releasing a feature in BPEL that closely approximates the design concept.  Please note that it is close but not identical in semantics; but to dwell on that would be pedantic – pardon the pun!

The subprocess allows a programmer to define a block of code or an algorithm that will be reused as-is at various points within one or more BPEL processes.  I took the liberty of taking a few screenshots of active code from a recent client engagement in order to illustrate a subprocess in action.  Before diving in, though, it is useful to set the context.  The subprocess is being used in this scenario to wrap the invocation of operations on a service, and the task is named here “QueryMaximoSP” – see the highlighted item in the diagram below.

[Figure: call to the subprocess from the parent process]

This subprocess wraps all the operations on the AWMaxAssetABCS service, and can determine which one is to be invoked by examining the inputs it receives from a main process.  In this simple example, the choice is implemented as if/else paths for the “query” and “update” operations.  Once the operation has been determined, invocation of the target service follows, using the input that was passed in by the main process.

[Figure: subprocess internal details]

There are three benefits that this concept brings to BPEL code:

  • Simplification of design view
  • Improved reusability
  • Improved modularisation

In the design view, hiving off some code into a subprocess frees the screen from clutter and makes it much easier to see the big picture of what we are doing in the main process.  But what becomes of all this delegated code in production systems?  One fear may be that the subprocess will be displayed as a black box.  This is not the case: all steps in the subprocess are revealed at runtime, but only if they are executed.

Modularisation is improved because programmers can delegate as much logic to the subprocess as needed, including specialised error-handling and rollback, pre/post processing, and other conditional processing.  All of this functionality is thereafter available to all processes within a SOA composite, and each can invoke (reuse) the same subprocess multiple times with different inputs.

In the diagram below, input has been provided for the query operation while the update operation has been left blank, and so only the query operation will be acted upon in the subprocess.  Notice also that the “Copy By Value” checkbox of the output variable has been left unchecked.  This is important, as the invocation is likely to fail on the output variable if the checkbox is checked.

[Figure: subprocess assignment]

I have found it really useful to encapsulate the invocations of partner services in subprocesses, so that I can control all necessary pre- and post-processing in one place, as well as any error handling and recovery that is specific to that partner system.  In future, it would be nice to have subprocesses that more closely resemble the concept in process models, i.e. a unit of functionality that can be reused across the domain, not just within one composite.  Now, some would argue that we should create a new composite to encapsulate the required functionality, but of course that would not be the same thing as in the model; such a construct would also be significantly slower, and it would not fit nicely into a service portfolio.  Let’s wait and see if Oracle will tweak this feature in the next release of JDeveloper and SOA Suite; for now, there are already some great benefits realised in subprocesses.  I hope you find them useful too.

Regards and God bless.


Oyewole, Olanrewaju J (Mr.)
Internet Technologies Ltd.
lanre@net-technologies.com
www.net-technologies.com
Mobile: +44 [0] 793 920 3120

The Fundamentalist SOA


Fundamentalism in SOA is to see everything as a service. Raise this thinking to the strategic level and one starts to ask questions of services, such as: “Is this a core competence?” and “Do we provide it as effectively as others?”

At the technological level, the questions that come to mind are: “Do we understand the business requirement correctly?”, “Have we properly decomposed the requirements into functional units?”, and “Do we have generic patterns that can be used to implement a solution for each functional unit?”

The end game of this fundamentalist SOA, in my humble opinion, is to have synthesised an abstraction of one’s business context and provided genericised implementations for every functional aspect that is amenable to automation. At this point, IS/IT would have created a platform that delivers the business as a “label-agnostic” engine, and this engine could be optimised by trimming off those units that are better/cheaper provided by outsiders.

This SOA enables business by providing a performant platform for delivering services to clients, and the opportunity to outsource non-core competencies and inefficient implementations.


Oyewole, Olanrewaju J (Mr.)
Internet Technologies Ltd.
lanre@net-technologies.com
www.net-technologies.com
Mobile: +44 [0] 793 920 3120

Rhema Bytes: The Business to SOA Nexus

I have a little confession to make, one of many: I maintain a simplistic view of business. In my simple view, a business starts as an idea or vision, depending on your Weltanschauung, and this idea articulates some value (services or products) that the business will provide to the community – Business (B) or Consumer (C).  It is vital to mention that “service” in this context is an abstraction and has nothing to do with technology.  These services will often require some input, manipulation and output, which will be conducted by humans together with some machinery. The complement of input, manipulation and output, when formalised, is sometimes referred to as a standard operating procedure (SOP) or process.

Now for the nexus…
For the business idea/vision to become a reality, there will need to be some transformations in the real world.  These transformations will start from nothing, and incrementally deliver concrete things that advance the business towards the realisation of the vision.  Each transformation is realised by way of projects, and the projects may identify some opportunities for automation of the SOPs/processes mentioned above. Automation of an SOP would be by way of implementation as one or more technological services.

SOA, for me, should therefore include a focus on this initial vision and how it filters down through the transformation programmes and/or projects to the individual services, as well as the portfolio/complement of all services that serve the business. My simplistic view is that there is a minimum set of core technological services needed to support a particular genre of business, and that this magic complement can be expressed in a generic form that is not tied to the name/identity of the business it serves.

To achieve this, the SOA perspective needs to be different: services should be conceptualised with the perspective of an agnostic/outsider. The architect should try to see themselves as a third party providing that service to businesses in general and not to the organisation in particular. This perspective should also be broad enough to identify services that are best factored out, and those that can be profitably re-packaged into a higher value offering.

The end goal is the discovery and clear articulation of the magic complement of services that supports the parent business. However, a delightful side effect should be the realisation of a platform that can, in part or in whole, serve other businesses of the same genre or in the same sector. I am of the opinion, simplistic I agree, that we are often too timid or parochial in our view of solutions. There are not so many unique reference architectures out there, yet within the same sector one will encounter many different implementations, even among very similar businesses. I believe we can do better.


Oyewole, Olanrewaju J (Mr.)
Internet Technologies Ltd.
lanre@net-technologies.com
www.net-technologies.com
Mobile: +44 [0] 793 920 3120

Factory Approach to Service Engineering

[Figure: Service factory abstraction]

When most people think of a factory, the imagery that is conjured is one of mindless repetition, and the generation of large numbers of low-value items. A good example is a nut & bolt factory. In this world, value accrues to the investors from the little profit made on each one of the millions, or billions, of items.

This does not tell the full story of factories. There is another view that most of us do not readily think of. I call this genre the compositing factory. Good examples are found in the many custom bike shops across the USA, many of whom assemble engines from Harley-Davidson, and kit from other suppliers, into dream machines especially tailored for their high-end clients.

Both perspectives have one thing in common. In real life, there are designers who articulate a general template of the “thing”, and there are producers who directly replicate the template, or customise it before replication.  The nut & bolt factory represents the former (direct replication), while the custom bike shop illustrates the latter (customised replication). There are templates and meta-templates for both bike and nut. The nut template will be driven by permutations of materials, size and strength, whereas the bike template is a composition of an engine, frame, gears, tyres and some other bits.

In SOA architecture and design, we are also concerned with templates (ABBs and SBBs in TOGAF). Our templates are sometimes abstract, sometimes concrete, sometimes composite, sometimes atomic. Whether as a reference architecture or a component design, the focus is on a template that solves a generic problem. However, most of the time these templates are not to be replicated verbatim. Their value is almost always realised in some composite or aggregative context; some intelligence in their application is a sine qua non.

For any enterprise, there will be a minimal set of services that must be realised for the organisation to be a meaningful participant in its sector. In addition to these core services, there are others that help to differentiate the organisation. These can be regarded as the macro templates. At the micro level, we find that each genre of service must complete certain tasks in order to deliver meaningful value to clients. Once again there could be differentiation by way of order, algorithm or complement, but by and large there will be a minimal set of tasks that all must perform.

If we apply the mindset of the custom bike shop to our architecture practice, we should see quite a few tools in-house that we can use/reuse, some that can be bought, and a few that we need to fabricate. I have found that while many enterprises adopt the “reuse, buy, build – in that order” principle, not all evaluate the comparative costs of these options before making a decision. The consequence is that build and buy usually outnumber reuse in most organisations. In the cases where there is reuse, existing services are often rendered functionally ambiguous to cater for slightly different use cases.

In a previous article, “Rhema Bytes: The Business to SOA Nexus”, it was argued that architecture should seek to create a platform of agnostic services that are well suited to serving the genre of an organisation, rather than the organisation specifically. If one were to decompose an enterprise top-down, starting from the highest level of abstraction, it should be somewhat easier to identify functionality at its most granular level. The analysis of each granular functional unit can then help determine the comparative value of reusing, building or buying services that provide the required competence.

So, for a new business initiative that delivers services X, Y and Z, we could ask if there is a Harley-Davidson engine that fulfils X; whether a Volvo axle (Y1), Saab transmission (Y2) and Toyota electrics (Y3) deliver Y; and whether a component Z5a is truly unique and needs to be built, alongside Z1..Z3 and Z4c, which do not already exist in our catalogue.

Each service, whether bought, built or reused, is then properly catalogued as to the value it provides, its comparative cost, and the contexts it is to be used in. Such a compendium, built over time, makes it much easier to assemble solutions. Every instalment of this approach makes the next assembly simpler and quicker, because most unique use cases/scenarios are covered off in the early solutions, and subsequent projects will reveal fewer unseen scenarios.

A lasting benefit of this mindset is that federation and outsourcing are made that much easier, since the templates for the product/service and its composition are predetermined. This means that production and assembly can be separated, and build and testing are more effectively decoupled. In a previous article, “Rhema Bytes: SOA Services Abstraction”, one such model for templating service genres in a SOA is explored. Combining this mindset with the pieces identified in that article should result in a flexible, nimble and responsive “service factory”.

Presentation: AFactoryApproachToServiceEngineering.ppt

Oyewole, Olanrewaju J (Mr.)
Internet Technologies Ltd.
lanre@net-technologies.com
www.net-technologies.com
Mobile: +44 [0] 793 920 3120

Making the best of your 9-5

[Figure: Canal Digital office in Fornebu – work smart, not hard]

I did a survey of my dad’s generation and his friends recently, and I made an interesting discovery: most of the “fun” guys are still alive, but the serious and really hard-working ones are all gone! I have decided not to end up like the latter group.

The vast majority of human beings spend their best years working. Work occupies most lives between the ages of 20 and 60. Work consumes the best of the waking hours of each day. Work straddles 5 out of 7 days of each week. Every working day, most adults will invest 10 hours attending to their work – about 2 hours commuting and 8 hours on the job.

Every year, we spend more time at work and with our co-workers than at home with our family, or out and about with our friends. Work environments and outcomes have a great impact on the health and life of all but the very rich, and the few contented. Work-related stress often mars the peace at home, but the brevity of the rest and comfort at home greatly limits its impact on work. Perhaps tellingly, a worker is most likely to suffer a heart attack between Sunday night and Monday morning. Wow!

So, make the best of your working life, by choosing to enjoy each working day! It is simple but profound advice; don’t ignore it. Sit comfortably; your butt is in that contraption for many hours each day. Take every opportunity to laugh. I can assure you, it is much better than crying. Laugh at your challenges, laugh at your mistakes, laugh with others as they too stumble through situations. Guess what, most of what seems monumental today, will be irrelevant in a few months, if not a few days. The crisis/catastrophe of today will be next year’s dinner joke. That is a long time waiting to enjoy a joke, especially if you will be the subject.

Don’t scrimp on lunch; you have earned it. Eat well and have a large drink, preferably non-alcoholic; it is lunch, not dinner. Try not to eat alone. You may think there is a shortage of good company and that hypocrites crouch at every table. Well, dive in, one more wouldn’t hurt the mix 🙂

Thinking of something nice to nibble? Why not buy enough for a dozen. We all enjoy surprise treats, and niceness is really infectious. You may also be interested to know that it will not alter your financial trajectory. If you are going to be poor, middle-income or rich, that extra few dollars a week or month will not break the bank. But it will make a big difference to the mood in the office.

Enjoy the environment. Don’t wait until you are in an ideal or dream premises. You may be here for a while, so try to enjoy every day of it. A stroll after eating is healthy, and it just might rein in that runaway waist. If you are up for the challenge, take the stairs rather than the lift. If you keep it up, you may have completed the equivalent of a marathon by the end of each year!

Treat your colleagues well. The opposite is like crapping in the village stream: everyone gets sick eventually. No one likes being treated badly, and most will find a way to get even, however long it takes. Show consideration and compassion to those who struggle. It is really nice to be nice! You will feel good for it, and the beneficiaries will not forget it, or you, in a hurry. Payback may not come straightaway, but you will have increased the likelihood of experiencing the same empathy sometime in the future.

When away from work try to shut down and enjoy life. Spend quality time with your family and friends, remember you came to work so you can maintain your family and keep up with your friends. No one goes to work to find family or to reach friends. If you drop dead tomorrow, your family will be devastated, your friends will be distraught, and your employer will find a replacement. Get your perspective right. If you were to wake up sick tomorrow, make sure that you can take consolation in a truly enjoyable yesterday.

If you don’t like your job, it’s time to change. Trust me, it is not worth it. Don’t be conned into “living” in the office. Work-life balance is vital; you can just about achieve it on a 9-5, but certainly not on a 9-9. Don’t exaggerate your importance or the value of the task at hand. Remember, this is a marathon, not a sprint. Retirement these days is granted in the late 60s, so if you are still in your 20s to 50s, you may have decades yet ahead of you. Take it easy.

Spend a few minutes reading, viewing or listening to things that delight you, preferably things unrelated to your office work. You will feel reinvigorated after the break, and new ideas may emerge for hitherto intractable problems. Remember to get up and walk once every hour or so. It is good for your legs, bottom and eyes. You should save yourself for retirement; you will need those arms and legs if you are to enjoy the fruit of your decades of labour.

Work smart, not hard.

I have acted to change my perspective: to work smart, not hard. And I hope to be around to enjoy retirement. I pray you will be there too!
Have a great day, work wise!
God bless.


Oyewole, Olanrewaju J (Mr.)
Internet Technologies Ltd.
lanre@net-technologies.com
www.net-technologies.com
Mobile: +44 [0] 793 920 3120