Having been on the receiving end of Request for Information (RFI) and Request for Proposal (RFP) responses, from an evaluation perspective there are ways respondents can make it easier for the evaluation panel to assess what is being proposed, and ultimately have greater success in getting through to the next round. These considerations come from my experience with software package selection and delivery partner selection, but should be applicable to many other selections.

1. First impressions count

Even before the RFI/RFP response is opened, an evaluator can be swayed by the presentation of the response and the level of engagement getting there. Key considerations:

- Ask questions during the response period to validate any areas lacking clarity, but don't go overboard.
- Make sure you meet the response times.
- Use good quality paper and colour (if required to present a paper copy). Binding can make a document look classier.
- If the response requests that all questions a
Today was the second and final day of the IT & Enterprise Architecture Conference 2015. Below is a summary of my key notes.

Going beyond IT: What EA can really mean for your organisation
John Pearson, Business Architect, IAG

- Consider the demand vs supply side of architecture.
- IAG is using the traditional TOGAF domains (Information, Application, Technology Infrastructure and Security), with business architecture being used to better align with the demand from the business.
- Enterprise Business Motivation Model (EBMM) - Accenture, Nick Malik (http://www.motivationmodel.com/) - an anchor diagram for business architecture, used to understand change impact.
- Bake business architecture approaches in early in the architecture journey.
- Understand the business, its priorities and where value can be added.
- Understand architecture capabilities in service to the business.
- Understand key stakeholder needs and communication preferences.
- Become involved in the strategy conversation.

Technol
"Site Reliability Engineering: How Google Runs Production Systems", edited by Betsy Beyer, Chris Jones, Jennifer Petoff and Niall Richard Murphy, contains a number of insights into Google's SRE practice. It is a bit repetitive at times, but the repetition helps drive home some of the key points. My key takeaways were:

- Google places a 50% cap on all aggregate "ops" work for all SREs - tickets, on-call, manual tasks, etc. The remaining 50% is for development. If development work consistently falls below 50%, some of the operational burden is pushed back to the development team, or staff are added to the SRE team.
- An error budget is set that is one minus the availability target (e.g. a 99.99% availability target gives a 0.01% error budget). This budget cannot be overspent, and it helps balance reliability against the pace of innovation.
- Cost is often a key factor in determining the proposed availability target for a service. If we were to build an operate
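The error-budget arithmetic is simple enough to sketch. A minimal illustration in Python (function names are my own, not from the book): the budget is one minus the availability target, and multiplying it out gives the downtime that budget permits over a period.

```python
def error_budget(availability_target: float) -> float:
    """Error budget is one minus the availability target (e.g. 0.9999 -> 0.0001)."""
    return 1.0 - availability_target

def allowed_downtime_minutes(availability_target: float, period_days: int = 30) -> float:
    """Downtime in minutes that the error budget permits over the given period."""
    return error_budget(availability_target) * period_days * 24 * 60

# A 99.9% target leaves a 0.1% budget: about 43.2 minutes per 30 days.
# A 99.99% target leaves only about 4.3 minutes per 30 days.
print(round(allowed_downtime_minutes(0.999), 1))
print(round(allowed_downtime_minutes(0.9999), 2))
```

This makes the reliability/innovation trade-off concrete: each extra "nine" cuts the permissible downtime by a factor of ten, which is where the cost factor mentioned above comes in.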