April 1, 2020
The Webcast: Low Risk and High Reward in Application Retirement with Archon and InfoArchive


We're talking here about risk and reward. Before I continue the presentation, I want to highlight what we mean by risk and reward, because we spend so much time in technology talking about the risk and reward of turning new things on: implementing new capabilities, trying to push things out to new audiences. What we're going to talk about today is the other risk and reward, the risk of not turning old things off, and how you can actually do a lot to help your enterprise, and be rewarded for it, by starting to clean up the things that exist in your enterprise. So our agenda today: we're going to get into how we got here, to some degree; get into what you can do about it; and then part of this presentation will include a demo of the product and some next steps.
Generally, in the IT landscape for any organization, and I don't care how big you are, how small you are, or how you got here, there is a theoretical balanced portfolio of technology. Going from left to right, you have the newest things you're doing, the things you just turned on, which we put in the "young" category; then the things that are moving from departmental to enterprise, let's call those "adolescent"; then the technologies that are in their prime, actively being used by everyone; and then the technologies that are starting to get, let's call it, long in the tooth, which we should be moving out of the portfolio. But where we really are today is a more distorted portfolio: we have new things coming in the door, we have young and adolescent things, but as you move from left to right we go from mature to senile to zombie. In fact, this is pretty much exactly how Gartner talked about it a few years ago, around the need for an "application undertaker" in an organization: how do you contend with the things on the far right-hand side of this distorted portfolio? Because they are consuming more and more of your IT resources, more and more of your data center, and causing more and more problems in your enterprise around your ability to do what you want to do.
So let's think about some of the challenges facing all technology organizations. Even as hardware costs less and cloud services have decreased the overall cost from a platform perspective, overall costs are going up: maintenance is going up, labor is going up. I think that's also partly a function of how much more important IT departments are to enterprises today than ever before. So there is a rising-cost problem. We also have an outdated-systems problem.
Platform 3 is named after the idea that the first platform was the mainframe: large, raised-floor, water-cooled, batch-processing mainframes. The second platform was the first generation of open systems used to do the big enterprise things, generally on premise and browser-based. The third platform, where everyone is trying to go next, is the nimble hybrid cloud service model with a whole new paradigm for how you develop. We want to help you get to that third platform, but to do it we need to work out how to help you mitigate costs so we can take away these outdated systems, and, most importantly, deal with the compliance issue, whether that's governance, risk, compliance, or data retention; we deal with this ongoing need for compliance around information in corporations today.
On the rising-cost front, I talked about labor costs as an example, but as we keep these older systems around, the value of someone who knows RPG or COBOL on a mainframe is actually at its highest level in history, because there are so few people left who know how to do anything with them. That specialized staff and knowledge carries a very high cost. And we always have to maintain a business-continuity model, a backup, recovery, and HA/DR strategy, so no matter what is in your data center, it all has to be recoverable. You're not just paying once for that legacy or zombie system; you're paying for it twice or three times.
We're also dealing with outdated systems and technologies, and I don't mean outdated as in you don't use them today. I mean outdated as in: if I consider all the technologies, all the versions of things going out of service in the last 12 months or the next 12 months, it's a pretty big chunk of what makes up a data center today, from an operating-system perspective, a database perspective, an enterprise-application-layer perspective, and so on. We have components causing out-of-service issues, and out of service is a bad word, because out of service means you're not getting security patches, you can't get support anymore, and you suddenly have a potential exposure. In this age of the Internet, there is no lack of that one weakest link becoming the access point into an enterprise for someone outside it to do something not nice.
From a compliance perspective, there is a dirty word out there: retention. Think of what I consider the three L's of retention, laws, legal, and lawsuits, and what drives those needs for compliance and retention. All of these legacy systems were never really designed from a retention perspective; they were designed to run your business as it was at the time. So you need to come up with a way that, as you consider how to clean out these items in your data center, you also keep in the back of your mind the fundamental need to deal with retention, because no matter how big your company is, there is some lawyer or legal person worrying about compliance risk.
Now consider what the analysts are saying about where things are going. Gartner says that three times as many applications need to be decommissioned as implemented. Forrester says we spend an average of 72% of our money just to keep the lights on. So how much of the overall budget you spend on technology goes to just that? It's almost like the light bulb in my son's bedroom: he won't turn his light off, so during the day that bulb just burns energy. How many of those light bulbs are running in your data center, really accomplishing nothing?
Why is all of this happening? Well, retention requirements dictate a time window for information to be available and retrievable, and I say that out loud because it isn't just about putting tapes in an Iron Mountain warehouse somewhere and hoping you never have to pull them again; the information needs to be somewhat available, retrievable, and searchable. And as I mentioned before, it isn't just about finding that rare old COBOL or RPG programmer who knows how to deal with what you have running internally. How about the subject matter experts around the data in the system, the ones who have retired, left, or been laid off? It isn't just about how you program the system; it's about that institutional knowledge. I'm a golfer, so I think of it as course knowledge: my knowledge of the course and what's out there.
So how do people deal with this today? The first approach is: how about I just back up the data and throw away the system? Well, that doesn't work, because you lose the context. Or what if I just put the whole thing in a warehouse? I actually had a client tell me once, "We just turned off the AS/400, boxed it up, and had Iron Mountain put it away for us; if we ever need it again, we can plug it back in and turn it on." That was their decommissioning plan. Or there's the elephant in the room: you keep it running and nobody talks about it; it's consuming electricity, it's sitting in the back of the data center, and we just don't talk about it. And then, finally, what I find to be the root cause of how we get here: duplicate systems from mergers and acquisitions. We work with one client that has grown by acquiring 55 entities over the last ten years, and the chart of the legacy HR systems they have is amazing.
There's also moving workloads to the cloud. As you move, say, that on-premise PeopleSoft system into Workday in the cloud, there's a lot of data that can't be migrated to Workday, so you have to keep PeopleSoft around. Then there's upgrading a large system to new versions: I come from a long history of working with technologies like SAP, and as you move to SAP HANA from the older SAP releases that run on traditional relational databases, there's a lot of data that can't be moved, and some of that data has a retention requirement around it, so you end up keeping the old version running in anticipation that it may one day need to be searched because of the data that didn't get migrated. And if anyone has ever done the math on what a data center costs per square foot to build and maintain, there's no lack of data center consolidation going on right now.
That's how we got here. I don't mean to be so draconian and negative about all of it, because there is a way forward. We all have our iPhones with all these great apps on them, we have all these browser-based tools we're using, but there are opportunities to do more. We need to start with a repository that can handle receiving the data that makes sense to maintain, and here at Platform 3 we work with OpenText InfoArchive. It is an open-standard, XML-based repository, which means it can take in structured data; what I call semi-structured data, like text strings and so on; or unstructured data, like pictures, PDF files, videos, and all the other unstructured stuff. It also has a very low footprint in terms of what is required to administer it, and it has the same abilities to index and search as a traditional relational database. What I also like about InfoArchive is that it brings all of these in-browser viewers with it: if you want to view a PDF file, a Word doc, or a JPEG, those viewers exist inside InfoArchive, so you don't need them as plug-ins in your browser. It also goes very deep on the requirements around security and encryption, so you can encrypt and mask data. Masking means that maybe Gretchen should be able to see all of the HR data for everyone in the system, including Social Security numbers, while I, Tyler, am not allowed to see those numbers, so you mask them, you hide them. You get all of this automation of encryption and masking.
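To make the masking point concrete, here is a minimal sketch, assuming a simple role-to-fields mapping, of how role-based masking of a sensitive field like a Social Security number can work. It is illustrative only and does not use InfoArchive's actual API; the role names, field names, and helper functions are made up for the example.

```python
# Illustrative sketch of role-based data masking (not InfoArchive's API).
# Assumption: each role is granted a set of fields it may see in the clear;
# everything else in SENSITIVE_FIELDS is masked before display.

SENSITIVE_FIELDS = {"ssn", "salary"}

ROLE_CLEAR_FIELDS = {
    "hr_admin": {"ssn", "salary"},   # e.g. Gretchen: may see everything
    "analyst": set(),                # e.g. Tyler: sees no sensitive fields
}

def mask_value(value: str) -> str:
    """Hide all but the last four characters of a sensitive value."""
    return "*" * max(len(value) - 4, 0) + value[-4:]

def mask_record(record: dict, role: str) -> dict:
    allowed = ROLE_CLEAR_FIELDS.get(role, set())
    return {
        field: (value if field not in SENSITIVE_FIELDS or field in allowed
                else mask_value(str(value)))
        for field, value in record.items()
    }

employee = {"name": "J. Smith", "ssn": "123-45-6789", "salary": "88000"}
print(mask_record(employee, "hr_admin"))  # SSN shown in the clear
print(mask_record(employee, "analyst"))   # SSN masked, only last four visible
```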
Also important to this entire exercise, and something we're going to start discussing now, is the idea of chain of custody. If I have data in this old AS/400 system and I move it over to InfoArchive, how do I know it all got moved? That's chain of custody. That means it is compliant; that means a judge would say, yes, this is an accurate depiction of what exists in that legacy system.
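As a simple illustration of what a chain-of-custody check can look like, the sketch below compares per-table row counts captured at the source with counts taken after the load. The table names, connection objects, and SQL here are placeholders, not Archon or InfoArchive interfaces.

```python
# Minimal sketch of a row-count chain-of-custody check (placeholder APIs).
# Assumption: `source_conn` and `target_conn` are DB-API connections to the
# legacy system and to the archive store, and both expose the same table names.

TABLES = ["GL_ACCOUNTS", "GL_DETAILS"]  # hypothetical JBA-style tables

def row_counts(conn, tables):
    counts = {}
    for table in tables:
        cur = conn.cursor()
        cur.execute(f"SELECT COUNT(*) FROM {table}")
        counts[table] = cur.fetchone()[0]
    return counts

def chain_of_custody_report(source_conn, target_conn, tables=TABLES):
    """Return per-table (source, target, match) tuples for the migration."""
    src = row_counts(source_conn, tables)
    tgt = row_counts(target_conn, tables)
    return {t: (src[t], tgt[t], src[t] == tgt[t]) for t in tables}

# Example usage (assuming the two connections exist):
# for table, (s, t, ok) in chain_of_custody_report(src_db, dst_db).items():
#     print(f"{table}: source={s} target={t} {'OK' if ok else 'MISMATCH'}")
```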
Then, no matter what, you still want a data retention capability, just as you would want across the rest of the enterprise: do I keep this for five years, seven years, ten years? And what I especially like about InfoArchive, compared with other technologies in the data center that are priced on cores or perhaps on named users, is that it's priced on data size. That allows you to take systems offline and move the data into an environment that actually makes sense based on what you need to keep and retain.
Now, alongside InfoArchive we have Archon, which comes from Platform 3. Archon has five primary functional components that really empower this act of retirement and decommissioning. The first is extraction connectors: the ability to connect to databases, to legacy mainframe and AS/400 systems, and to enterprise content systems. We have also encapsulated P3 connector bundles so that, for instance, with PeopleSoft it isn't just about connecting to PeopleSoft; it's also about re-creating inside InfoArchive the canned reports you had inside PeopleSoft. So it isn't just the data connection; it's also the outputs.
The second is automation and analytics. As we've discussed, if you have these senile or zombie systems, odds are the subject matter expert isn't around anymore, so how do you reverse-engineer the entity relationships and the cardinality of the data? You need to move the data not just from a content perspective but from a context perspective. The third is extraction services: do you need to trickle-feed data out of that old system, or do you need to bulk-load it out of there? That's really a function of what network bandwidth you have available, and we have all the extraction services to move the data from that legacy system directly into InfoArchive.
we have all of this knowledge already because we’ve connected and analyzed
that legacy system we also have on the other side of it the
ability to auto configure info archive to accept the data coming from that
legacy system and to also pre configure the data relationships and to also
pre-configure like I said the canned reports and the query strings but if you
need custom query strings or you have special requirements and how you want to
search on and report out of this legacy system over at for archive we come with
a query wizard builder so that if it’s you click through and because we already
understand the data we can help you easily create screens out the other side
in info archive as well now I’ve been doing a lot of talking showing you a lot
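To give a rough sense of what auto-configuring a target from discovered metadata can mean, here is a small sketch that turns a discovered table description into DDL for the receiving store. The schema dictionary and type mapping are made-up examples, not Archon's configuration format.

```python
# Illustrative generation of target DDL from a discovered schema description.
# The schema dict and type names are hypothetical, not Archon/InfoArchive formats.

discovered = {
    "GL_ACCOUNTS": {
        "columns": {"ACCOUNT_NO": "CHAR(10)", "LEDGER_NO": "CHAR(4)", "NAME": "VARCHAR(40)"},
        "primary_key": ["ACCOUNT_NO", "LEDGER_NO"],   # compound key found by analysis
    },
}

def table_ddl(name: str, spec: dict) -> str:
    cols = [f"    {col} {ctype}" for col, ctype in spec["columns"].items()]
    if spec.get("primary_key"):
        cols.append(f"    PRIMARY KEY ({', '.join(spec['primary_key'])})")
    return f"CREATE TABLE {name} (\n" + ",\n".join(cols) + "\n);"

for name, spec in discovered.items():
    print(table_ddl(name, spec))
```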
Now, I've been doing a lot of talking and showing you a lot of screens, and I feel it's worth doing a demonstration. In this demonstration of Archon and InfoArchive we're going to connect to a legacy JBA financial system on an AS/400; I figured I might as well find the least attractive option possible to show as a demonstration. Mind you, JBA is an ERP system out of England that essentially stopped existing 20 years ago, so this is about a 25-year-old application. We're analyzing the system because, lo and behold, no one really knows what the data relationships inside JBA are, and we need to make sure that when we extract the data and put it into InfoArchive, those data relationships move with it. We're going to auto-set-up InfoArchive from the data map and relationships, we're going to execute the move (and we're only going to do this on a subset of the data), we're going to show how Archon can quickly create query screens and reports that somewhat match what you saw on the AS/400 screen in the first place, and then we'll show you the results in InfoArchive. This entire demonstration will take about seven minutes. It's worth noting that, a bit like a baking show where people in the kitchen make the batter, pour it into the pan, and then 30 seconds later go to the oven and pull a finished one out, there's a bit of that going on in this demo, because we don't need to show you how we log on to things; we want to show you the real meat of this exercise.
Here is this very lovely JBA screen on the AS/400. It's an ERP system, and this is the general ledger environment. You can see the accounts information here, and we can hit a function key to go down into the detailed view of this data as well. We're going to move this out of JBA and the AS/400 and into InfoArchive. As we log on to Archon, you can see it has a full dashboard on the front end so that you can manage, connect, analyze, and move information, and it also comes with a pre-configured set of tools specific to InfoArchive, covering setup, maintaining chain of custody, and query wizard building. We're already connected to the AS/400, but now we're going to go in and analyze this JBA system. Bear with me as I click over into the analysis side: I'm able to analyze it based on just the schema view, I'm able to do column matching, and I'm able to do stored procedure analysis as well. Out the other end I create the relationships, what are my join columns or my key columns, for you database people, and capture whether each relationship is a single key or a compound key, and make sure those carry through. I can also go in and do deep interrogation of the data if I don't have that information, and do some serious data crawling. Out the other end I'm able to auto-generate not just the DDL statements but also an entity-relationship diagram that you can use internally to communicate what's going on.
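The way Archon discovers relationships is its own; purely as a rough illustration of the column-matching idea, here is a naive sketch that proposes candidate join columns by comparing column names and types across tables, flagging single versus compound keys. The schema shown is a made-up example, not JBA's actual layout.

```python
# Naive sketch of candidate join-column discovery by name/type matching.
# Real tools also use stored-procedure analysis and data profiling; this
# only illustrates the basic idea on a hypothetical schema description.

from itertools import combinations

# Hypothetical schema: table -> {column: type}
schema = {
    "GL_ACCOUNTS": {"ACCOUNT_NO": "CHAR(10)", "LEDGER_NO": "CHAR(4)", "NAME": "VARCHAR(40)"},
    "GL_DETAILS":  {"ACCOUNT_NO": "CHAR(10)", "LEDGER_NO": "CHAR(4)", "AMOUNT": "DECIMAL(15,2)"},
}

def candidate_joins(schema):
    """Propose (table_a, table_b, shared_columns) where name and type match."""
    proposals = []
    for (ta, cols_a), (tb, cols_b) in combinations(schema.items(), 2):
        shared = [c for c in cols_a if c in cols_b and cols_a[c] == cols_b[c]]
        if shared:
            proposals.append((ta, tb, shared))
    return proposals

for ta, tb, cols in candidate_joins(schema):
    kind = "compound key" if len(cols) > 1 else "single key"
    print(f"{ta} <-> {tb} on {cols} ({kind})")
```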
Now I'm going to go into the Archon Automator to take the knowledge I've just extracted out of this JBA AS/400 system and use it to set up InfoArchive. I'm also able to go in and create de facto search screens out the other end, decide what happens, and then hit schedule. Bear with me here as we move from Archon into InfoArchive; I think this is very important, because what makes this especially interesting is how we're able, behind the scenes, to create things this quickly in InfoArchive from Archon. So now you see we are within InfoArchive: we can go into the JBA data set, and you can see that everything moved over is created not just from a data and column perspective but from a context perspective as well.
Now we're going to go in and actually do an extraction from this AS/400 system and move the data over to the InfoArchive environment. We do this while maintaining the chain of custody, and within that chain of custody we have to capture and move the row counts along with the data. We can also choose to what degree we want to do this from a parallel-processing perspective, because this is a multi-threaded extraction engine: the default is 3, and I can scale that up or down depending on what my network administrator lets me have. It also appreciates the international nature of all this; it's worth noting that this JBA system ran in Europe, so we had European date and time formats, and we want to move to more of an ISO standard on the other end. While this is happening, you can see here where I'm capturing the record counts, because, again, if you're going to move things into InfoArchive you have to maintain that chain of custody, so I'm going to turn this on; you always have to be able to check things. We're going to schedule it and run it, and because this is running next to the AS/400 environment it's able to connect to it and extract from it. It also automatically creates chain-of-custody reports that you can give to your legal people so they understand when and what happened from a records-management perspective.
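To give a feel for what this extraction step involves, here is a simplified sketch of a multi-threaded table extraction with a configurable degree of parallelism, European-to-ISO date normalization, and per-table record counts for a chain-of-custody report. The function names and the fetch/write hooks are assumptions for illustration, not Archon's actual engine.

```python
# Simplified sketch of parallel table extraction with date normalization
# and record counting (illustrative only; fetch_rows/write_rows are stubs).

from concurrent.futures import ThreadPoolExecutor
from datetime import datetime

PARALLEL_DEGREE = 3  # default of 3, scale up/down per network constraints

def to_iso(date_str: str) -> str:
    """Convert a European DD/MM/YYYY date string to ISO 8601 YYYY-MM-DD."""
    return datetime.strptime(date_str, "%d/%m/%Y").date().isoformat()

def fetch_rows(table):          # stub: would read from the legacy system
    yield {"ACCOUNT_NO": "1000", "POSTING_DATE": "31/03/1998"}

def write_rows(table, rows):    # stub: would load into the archive store
    return len(list(rows))

def extract_table(table):
    rows = ({**row, "POSTING_DATE": to_iso(row["POSTING_DATE"])}
            for row in fetch_rows(table))
    count = write_rows(table, rows)
    return table, count         # record count feeds the chain-of-custody report

tables = ["GL_ACCOUNTS", "GL_DETAILS"]
with ThreadPoolExecutor(max_workers=PARALLEL_DEGREE) as pool:
    for table, count in pool.map(extract_table, tables):
        print(f"{table}: {count} rows extracted")
```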
Now we're going to go in and do an ad hoc query build. We've moved the data, and I want to create a particular way of searching for data inside InfoArchive. Since I already know the data and the data relationships, it lets me build very easily: taking those data relationships, picking the data fields I want from a table and column perspective, and building out some fairly elaborate screens automatically with this wizard-based builder. Please note that if fields are encrypted, that will be indicated as well, so even when someone is building query screens their access is taken into consideration. So now we're over here: what we built in Archon gets implemented inside InfoArchive, and you can see we can query on an account number and a ledger number, just as we did in the AS/400 environment, to look at the details; enter an account number and a ledger number. Again, this was a very simplistic version; you can get very elaborate in how you create things inside InfoArchive from a UI experience.
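Behind a query screen like this there is essentially a parameterized query over the archived tables. As a hedged illustration of that idea, with hypothetical table and column names and plain SQL rather than anything InfoArchive-specific, a generated account-and-ledger search might look like the following.

```python
# Illustrative parameterized search over archived general-ledger data.
# Assumption: the archived rows are queryable via a DB-API style cursor;
# table/column names mirror the hypothetical schema used earlier.

QUERY = """
    SELECT a.ACCOUNT_NO, a.LEDGER_NO, a.NAME, d.POSTING_DATE, d.AMOUNT
    FROM GL_ACCOUNTS a
    JOIN GL_DETAILS  d
      ON d.ACCOUNT_NO = a.ACCOUNT_NO AND d.LEDGER_NO = a.LEDGER_NO
    WHERE a.ACCOUNT_NO = ? AND a.LEDGER_NO = ?
"""

def search_ledger(cursor, account_no: str, ledger_no: str):
    """Run the generated search with user-entered account and ledger numbers."""
    cursor.execute(QUERY, (account_no, ledger_no))
    return cursor.fetchall()

# Example usage (assuming `cur` is a cursor over the archive database):
# for row in search_ledger(cur, "1000", "01"):
#     print(row)
```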
It's also worth noting that, if you look to the right, there is a retention policy and a retention length. The data that gets moved over can inherit those added metadata fields according to your retention policy, so things like retention, and how long data needs to be kept around, are taken care of automatically. You don't want everything you retire and move over to have to be managed perpetually, because then you get inconsistency in your retention policy.
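As a small sketch of the retention-inheritance idea, the example below stamps each archived record with a retention policy name and a computed expiry date at load time, so disposition can be evaluated later without managing records one by one. The policy names and field names are invented for illustration.

```python
# Sketch of inheriting retention metadata at archive time (illustrative only).

from datetime import date, timedelta

# Hypothetical retention policies: name -> retention length in years
RETENTION_POLICIES = {"FINANCE_GL": 7, "HR_RECORDS": 10}

def apply_retention(record: dict, policy: str, archived_on: date = None) -> dict:
    """Attach retention policy name and computed expiry date to a record."""
    archived_on = archived_on or date.today()
    years = RETENTION_POLICIES[policy]
    expiry = archived_on + timedelta(days=365 * years)  # approximate, ignores leap days
    return {**record,
            "retention_policy": policy,
            "retention_expires": expiry.isoformat()}

def is_disposable(record: dict, today: date = None) -> bool:
    """True once the inherited retention period has elapsed."""
    today = today or date.today()
    return date.fromisoformat(record["retention_expires"]) <= today

row = apply_retention({"ACCOUNT_NO": "1000", "AMOUNT": "125.00"}, "FINANCE_GL",
                      archived_on=date(2019, 1, 1))
print(row["retention_expires"], is_disposable(row, today=date(2026, 1, 2)))
```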
So that is Archon and InfoArchive, and I think that was about seven minutes. Again, we started with a legacy AS/400 system; behind the scenes I was already connected to it; I analyzed the environment around the tables that were important to me; I auto-set-up InfoArchive; I executed the move; I quickly created a query screen and report; and then I showed you the results in InfoArchive. Very seamless. Here's an example, and sometimes I like to show either really simple examples or really big ones, and this one comes from the approach of "if you can do it here, you can do it anywhere": a very large global bank. Very large, because it has 2.5 trillion dollars, with a T, under management, along with a large number of employees and IT staff. They were discovering that they had four primary issues with the sum total of their IT spend: security and risk exposure; the percentage of budget needed just to maintain; the number of data centers, which they were looking to bring down; and, most importantly, the fact that as you do this, it's not that the IT budget goes down per se, but they wanted to shift more money away from what they were paying to maintain the old toward doing what they want to do.
So they created what's highlighted here, and this is the exact name they use: the Global Technology Decommissioning Factory. It's their center of excellence, and it exists in a few countries around the world, because, when you think about it, as you're contending with turning something off, a lot of emotion and politics come into play; a lot of what goes into this isn't just the act of decommissioning but also the politics and negotiation of decommissioning. They invested in an enterprise-wide relationship with OpenText InfoArchive as their system of record, and what they really liked about it was the automated retention I highlighted very quickly in the demonstration, along with all the different data types it can handle and how easy it is to use. They also invested in Archon as a way to create a repeatable and evolving capability: what I showed you was one example against an AS/400, but what if you have multiple AS/400s internally running multiple versions of JBA? Every time you do one, you learn something more about what you should keep, what you should throw away, and what the best practices are. Archon is a learning engine, and as a product it has a roadmap of evolution as well, to better feed InfoArchive.
The results: here are some examples of clients Platform 3 has worked with over the years, helping them turn off anywhere from a couple of applications to hundreds or thousands across their enterprises. And think about it: no matter what industry you're in, there's a whole mix of organizations here. You see banking, insurance, technology, medical devices, discrete manufacturing of lawn mowers. What's interesting is that everyone has retention requirements. I don't care what industry you're in, there is an expectation that you keep certain data around for a certain amount of time, and in 2019 that data, no matter how old the system is, brings a mix of structured and unstructured mess with it. You also have dozens, if not more than dozens, of types of applications: what's the database, what's the application, is it a Lotus Notes system? We have clients asking us to help them decommission cloud-based systems, where they need to extract the data out of the cloud-based system and then shut it off. So this isn't just about a 25-year-old AS/400; it could be about a Box or Dropbox environment that's only a couple of years old, where you need to find the right way to extract everything in it and turn those licenses off so you can move forward to something else.
So who is Platform 3 Solutions? This business is all we do, all day long: we help organizations clean up their environments. We have over a hundred employees worldwide who focus on nothing but this, employees who are experts in areas like PeopleSoft, SAP, Lotus Notes, content management systems, and traditional database systems. We are all over the world from a geographic perspective, and we have very deep relationships and understanding, because, again, the point of this webcast is how you reduce the risk and increase the value. We reduce the risk with the sum of our experience and with the Archon tool we have developed, which allows you to have a repeatable best practice for how you do things, and out the other end we have the platform, products, and professionals to help you start your journey and create your own version of a center of excellence around the act of cleaning up your data center.
As I come near the end of this webcast, back to how I opened it: we now have what I like to call a new normal. We still have new systems coming in the door; we have the young and the adolescent. I look at it this way: new turns into the things you're piloting, which is young; adolescent is what you use departmentally; prime means departmental has turned into enterprise; and then you have things that are mature. You have to give maturity its due respect, because maturity means there is business logic in there that these other systems may not have, but you want to very quickly find a way to turn mature into something you retire. And, by the way, in the retirement effort I'm not saying you take the same five terabytes that exist on the old system and turn them into five terabytes in InfoArchive; there can be a lot that needs to get purged, a lot that doesn't need to get moved over at all. The retirement function isn't just about lifting and shifting into InfoArchive; it's about doing the analysis to understand what should be kept.
To close, some next steps for everyone on the webcast. First, it's worth noting that Archon as a product is resold by OpenText around the world, so no matter where you are in the world you can contact OpenText as a means to obtain Archon. You can also ask us to have a deeper conversation about what the combination of Archon and InfoArchive can do for you: InfoArchive is the system of record, and Archon is the system of engagement that helps drive decommissioning. We can also come in and help you see what the art of the possible looks like through a proof of concept, because we find that in a lot of cases people need to see it work against an example of what they have in their own data center. But we also find that this entire conversation really requires a bit of a cultural shift. That cultural shift starts with making sure everyone in the organization realizes that by doing this you are reducing risk and reducing costs, and in some cases people need to see what that business case looks like, so we at Platform 3, because this is all we do all day long, can come in and help you build your business case and understand how to shift your culture to clean these things up. With that being said, here are other ways you can contact us: check us out at platform3solutions.com, call us at the phone number shown, which is our main office here in Minnesota in the United States (we can also give you contact numbers for other places in the world), or just email the sales address at platform3solutions.com.
