The Modern Fund Office Blog



Given today’s perfect storm of risk, compliance and regulatory overload, The Modern Fund Office offers administrators, benefit funds and unions our perspectives on how your technology platforms, processes, people and data can create a rock-solid foundation for your operation.



By Michael Goldberger September 8, 2021
Agility is generally considered a virtue. To that end, the ability to work independently of your vendors - meaning you don’t have to depend on your vendor for ALL situations that require access to your data - gives you greater agility. In practice, that sort of independence is the result of two factors:

- The degree of access to all your data “allowed” by your vendor
- Your level of knowledge and skill to do something with that access

As we described in last month’s post, access to your data can come in many forms. You may be able to get at your data through reports, queries and other vendor-provided tools, which are all forms of “allowed access.” But it is important to remember that the fund office is the “owner” and “custodian” of all underlying data, and that the vendor-provided tools may or may not provide access to everything that constitutes the complete data set. Even if you feel this level of access is not necessary (and perhaps you wouldn’t know what to do with it anyway), it is an important consideration that may provide options in unanticipated situations. You can think of it as a form of insurance against something going wrong with your vendor. I am not talking about database backups here – also critical – but rather about having access to and an understanding of the complete data set that serves as the foundation for your administration systems.

In some cases, if you ask your vendor for a set of data, they are likely to say “Sure, what do you need? We’ll put that in a file for you.” While that is certainly a form of access, unless or until you have defined a request that covers all data elements and set up a scheduled delivery of those files (e.g., once a month), you haven’t achieved what we would call data independence. And that raises the question: “How do I know what to ask for?” The answer depends, but for most fund offices this would at a minimum include:

- All the individuals in the database with their unique system identifiers, including all available demographic information (name, address, dates of birth, marriage, death, etc.)
- All the contributing employers in the database with their unique system identifiers
- The full history of all contribution transactions, with appropriate identifiers that link to a person and an employer
- The full history of all benefit payments, with appropriate identifiers that link to a person
- The full history of all benefit applications, with appropriate identifiers that link to a person
- The full history of all benefit credits (e.g., pension credits) for each person, whether or not they were ever vested
- The relationships between members, dependents and beneficiaries (who is related to whom)
- For health and welfare funds, the full history of health eligibility for all persons in the database
- All configuration and setup data (e.g., lists of code names and values, tables of constants used within formulas, etc.)

If you don’t have easy access to your complete data set (which would include these elements), it may be time to work with your vendor to set it up. Equally important to “access” are the knowledge and skills to use the data. The only way to know that you really have “everything” is if you can decode the details. The knowledge component implies that even if it is not formally documented, you understand the data model that is used to support and organize your data. The skills component means that you have the ability (if necessary) to assemble the pieces (data elements) and make sense of them.
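To make that checklist concrete, here is a minimal sketch, assuming (hypothetically) that your scheduled monthly delivery arrives as one CSV file per data set; the folder layout and file names are placeholders, not any vendor’s actual format:

```python
# A minimal completeness check for a scheduled extract.
# The file names below mirror the checklist above and are hypothetical placeholders.
import os

EXPECTED_FILES = [
    "individuals.csv", "employers.csv", "contributions.csv",
    "benefit_payments.csv", "benefit_applications.csv", "benefit_credits.csv",
    "relationships.csv", "health_eligibility.csv", "configuration.csv",
]

def check_extract(folder):
    """Flag any missing or empty file so gaps are caught the day the delivery lands."""
    problems = []
    for name in EXPECTED_FILES:
        path = os.path.join(folder, name)
        if not os.path.exists(path):
            problems.append("missing: " + name)
        elif os.path.getsize(path) == 0:
            problems.append("empty: " + name)
    return problems

print(check_extract("extracts/2021-09") or "extract is complete")
```

A check like this, run on the day each delivery lands, turns “we think we have everything” into something you can verify.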
As we discussed in a previous post, you can probably do a lot using Excel to extract value from your data if you have mastery of the underlying components. Given what I have just described, I will close with a few questions to ask and answer when assessing your level of data independence from your vendor(s):

- Do you have a clear understanding of how your vendor stores and manages your data? Where is it physically, what sort of database is used and how large is the entire data set?
- If you need a new report or extract, can you get it yourself or do you need to ask your vendor to do it for you? If you are dependent on your vendor, how long does it take to get that turned around?
- Does anyone on your team have a full understanding of the underlying data model? What are the base tables and do you know how they are linked together? Can you create a diagram?
- If you can receive extracts of data, do you have a push or a pull environment? Push: the vendor sends you a file when they can, or according to a pre-defined schedule. Pull: you can grab up-to-date data as you need it (a minimal “pull” sketch appears after the list below).

If you can answer all these questions AND are satisfied with your answers, then you can safely assume you have sufficient data independence, which is a key factor in your ability to be agile and also contributes to moderating any risk related to your data.

10 Step Data Quality Program
1. You know where your data comes from in terms of systems and sources
2. You are aware of conflicts and inconsistencies between your data sources
3. You have an approach for resolving any conflicts between data sources
4. You capture data once, and use it in multiple places
5. You have documented what data is critical for implementing your business rules, and you have approaches for filling in any missing data
6. You have tools and processes for identifying and correcting flaws in your data
7. Your data exists in a format that makes it easy to access using readily available tools
8. You have independent access to your data
9. Everyone on your team is cognizant of the value of good data and the long-term costs of sloppy data
10. You leverage your data to support operations AND to support long term decisions
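Circling back to the push vs. pull question: here is a minimal sketch of what “pull” access can look like, assuming (hypothetically) that your vendor grants read-only SQL access to the underlying database. The connection string, credentials and table name are placeholders, not a real vendor API.

```python
# A minimal "pull" sketch: query current data yourself, on your own schedule.
# The DSN, credentials and table name are hypothetical placeholders.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("mssql+pyodbc://readonly_user:secret@FUND_OFFICE_DSN")

# Grab up-to-date contribution history whenever you need it,
# rather than waiting for the vendor to send a file
contributions = pd.read_sql(
    "SELECT person_id, employer_id, work_month, hours, amount FROM contributions",
    engine,
)
contributions.to_csv("contributions_extract.csv", index=False)
```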
By Michael Goldberger May 19, 2021
At this point in our data series, you might be wondering if you will ever get to hear about using the data you have been so carefully maintaining. Well, you are in luck: in this post I want to begin the conversation on using your data to provide insights, drive decisions and tune your business processes.

Believe it or not, the data that lives inside your benefits administration system may not be as accessible and usable as you would think. Sometimes, the simple act of getting the data becomes a project in itself, burdened by complex reporting tools or constrained access mechanisms. But not to worry: in most cases, you (or your IT experts) should be able to grab your data and put it into a tool that you know how to use, and the lingua franca here is typically Microsoft Excel. Excel is a powerful medium for sifting, sorting, reformatting, charting and generally putting data into a form that answers your questions or tells you a story. For this reason, I always recommend having access to one or two “super users” – either on your internal team or on staff at a vendor with whom you have a close relationship. Saying “I can’t – or my team can’t” when it comes to Excel is no longer an acceptable answer if you work in this industry (and if somehow this is where your fund office lands, there are a variety of free online resources for getting your team up to speed).

Even if your core systems include standard reports or reporting tools, having the capability to use Excel as an additional way to analyze and leverage your data will prove valuable in the long run. We find that the first batch of data or reports you generate typically spawns more questions than answers, so rapid iterations are often needed to get to those answers. This is almost always easier in Excel than in the reporting tools embedded in core systems. Ultimately, you may find that there are certain data or reports that you will want to have available as “standard” in your core system, and in this case, iterating in Excel can also help you define the requirements for that information. As a side note, if for some reason you cannot get your data out of your core system(s) and/or you cannot put your data into a spreadsheet, it is a leading indicator that it is time to make some changes.

Understanding your options for getting at the data will help you determine whether or not you need external assistance or additional expertise, so I have outlined the five main approaches below:

Reports: Historically, reports have been hard-coded into systems, with hard-to-change definitions of the data set and the page formatting. The nice thing about these types of reports is that they are typically easy to run and print in a format that is suitable for framing. Unfortunately, this type of formatted report is not so suitable for data analysis. If your system only allows you to output reports to a printer or a PDF file, that is a limitation in terms of accessing your data.

Exports: Exports usually allow a user to take the information that is shown in the user interface and save it as a file (typically Excel or CSV format) which can be opened in another program. Exports are nice in that they allow you to save data, but they may be limited because you only get the data shown on the screen.

Queries: Some systems have a query tool that lets users define a data set (based on a choice of fields to include and criteria for filtering those fields).
The result of a query can usually be exported to an easy-to-use file – essentially an advanced form of an export. The challenge with queries is that they often require a degree of expertise with the particular tools and syntax of your vendor.

Database Access: This is the most powerful – and most feared – approach to getting at your data. In the world of open systems, it is not unusual to have direct access to the data tables that form the core of your system. With an appropriate set of tools (and in fact, Excel is one of those tools) and someone who knows how to use them, you can create your own extracts that utilize the raw data in your system. Asking about direct access to the database, or even documentation of the database, is a good test of how “open” your vendor really is to this method.

Data Mashups: A relatively new, but potentially powerful toolkit that happens to live in Excel! Mashups are an approach that lets you take data from multiple systems and combine them, with powerful results. For example, maybe you have separate data sources for health benefits and retirement benefits but you would like to compare names and addresses across the two systems: that would require a mashup (see the sketch at the end of this post).

Once you have your chosen method(s) for accessing data and can get it into a usable format, you will want to make it easily accessible for anyone who can benefit from it. Newly created data sets or reports should be stored in a shared file location so that access can be set up for “self-service” – essentially instant access with zero waiting period. In particular, your users should not have to rely on printing, copying and pasting or rekeying to get a view or report that is useful. If that is happening, then something about your data isn’t working and you should look for the root cause.

For more about how to unlock the information in your core system(s) through better data access, or what it could look like for your fund office, drop me a line and I am happy to chat. Happy reporting!

10 Step Data Quality Program
1. You know where your data comes from in terms of systems and sources
2. You are aware of conflicts and inconsistencies between your data sources
3. You have an approach for resolving any conflicts between data sources
4. You capture data once, and use it in multiple places
5. You have documented what data is critical for implementing your business rules, and you have approaches for filling in any missing data
6. You have tools and processes for identifying and correcting flaws in your data
7. Your data exists in a format that makes it easy to access using readily available tools
8. You are not dependent on a software vendor for access to your data
9. Everyone on your team is cognizant of the value of “good data” and the long-term costs of “sloppy data”
10. You leverage your data to support operations AND to support long term decisions
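As promised, here is the mashup idea sketched in code rather than in Excel: a minimal example assuming (hypothetically) that each system can produce a CSV extract containing an SSN, a name and an address. The file and column names are placeholders.

```python
# A minimal "mashup" sketch: compare names and addresses across two systems.
# File and column names are illustrative, not any vendor's actual layout.
import pandas as pd

health = pd.read_csv("health_extract.csv")          # columns: ssn, name, address
retirement = pd.read_csv("retirement_extract.csv")  # columns: ssn, name, address

# Join the two extracts on the shared identifier
merged = health.merge(retirement, on="ssn", suffixes=("_health", "_retirement"))

# Surface every person whose name or address disagrees between the two systems
mismatches = merged[
    (merged["name_health"] != merged["name_retirement"])
    | (merged["address_health"] != merged["address_retirement"])
]
mismatches.to_csv("health_vs_retirement_mismatches.csv", index=False)
```

Power Query inside Excel can perform the same join; the point is that combining sources becomes a small, repeatable step once both extracts exist.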
By Michael Goldberger November 9, 2020
Today’s post in our series on data focuses on the importance of having the tools and processes in place for continually identifying and correcting any gaps or flaws so that your data is always accurate. At this point in our series, you know what data you have, where it comes from and where it lives. You can also easily figure out what you don’t have but do need, which means you know which data needs to be corrected and which gaps need to be filled in.

Since your data is always changing (new data is entered, existing data is updated), no one’s data is ever perfect at all times. Data is like a river; it’s always flowing. Just because it was all correct yesterday doesn’t mean it will be correct tomorrow. Compounding this data fluidity is the environment of the fund office: for many data elements, data collection and data entry often end up being manual processes, especially for member information such as birthdates, marital status and life events. And by definition, even when people are being careful, manually entered data is likely to have an error rate of 1-3%.

While some systems are quite rigorous about validating data before it is entered, others are much less so. It’s often a balancing act between imposing restrictions and controls on data entry to optimize inbound data quality versus allowing data entry to be fast and easy with few if any validations. This last point is important because onerous validations often drive creative methods for working around the process. A good example would be individuals fabricating a marriage date when it is not known, in order to get past a validation that requires a date to create the member record. Unfortunately, once that has been done, it can be very difficult to find the “fake” dates within the data, which can lead to unexpected problems down the road.

Our approach is a little bit different and is based on creating a regular and rigorous “exception detection reporting & correction process.” This is a proactive process that should be incorporated into daily or weekly routines, and it all but eliminates the challenge of waiting for a problem to happen and then going back to troubleshoot the data. Essentially, the core of this approach is to design and regularly run data exception reports AFTER the data is entered (vs. a VALIDATION process, which occurs before or during data entry). An example of such a report would be one that surfaces participants who are married but where the marriage date is missing. Another might surface people who are working but don’t have a date of birth (DOB), or where the DOB is unrealistic (e.g., the individual would be 122 years old).

It’s important to remember that even if your data is determined to be 99% good, if you have 1,000 people you still have 10 errors, which can be significant when it comes to providing individuals their benefits in a timely and accurate manner. Hence, the process is never finished and is ongoing: you’re always creating errors, surfacing errors and resolving errors. It is a mistake to think that data entry, and therefore data, is always perfect, but if you have a way to continually polish it, it will always shine.
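Here is a minimal sketch of the two exception reports described above, assuming (hypothetically) a member extract in CSV form; the column names and the age threshold are illustrative choices, not plan rules:

```python
# Two post-entry exception reports: run them weekly, after data entry, not during it.
# Column names and the 120-year threshold are illustrative assumptions.
from datetime import date
import pandas as pd

members = pd.read_csv("members_extract.csv", parse_dates=["dob", "marriage_date"])

# Exception 1: married members with no marriage date on file
married_no_date = members[
    (members["marital_status"] == "married") & members["marriage_date"].isna()
]

# Exception 2: missing or implausible dates of birth
today = pd.Timestamp(date.today())
age_years = (today - members["dob"]).dt.days / 365.25
bad_dob = members[members["dob"].isna() | (age_years > 120) | (members["dob"] > today)]

# Publish both lists to a shared location as part of the correction process
married_no_date.to_csv("exceptions_marriage_date.csv", index=False)
bad_dob.to_csv("exceptions_dob.csv", index=False)
```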
10 Step Data Quality Program
1. You know where your data comes from in terms of systems and sources
2. You are aware of conflicts and inconsistencies between your data sources
3. You have an approach for resolving any conflicts between data sources
4. You capture data once, and use it in multiple places
5. You have documented what data is critical for implementing your business rules, and you have approaches for filling in any missing data
6. You have tools and processes for identifying and correcting flaws in your data
7. Your data exists in a format that makes it easy to access using readily available tools
8. You are not dependent on a software vendor for access to your data
9. Everyone on your team is cognizant of the value of “good data” and the long-term costs of “sloppy data”
10. You leverage your data to support operations AND to support long term decisions
By Michael Goldberger October 21, 2020
Now that it is fall and we have all realized some type of new normal, I want to go back to our blog series on the importance of data quality for unions, funds and administrators. Now, more than ever, our new, often virtual environment depends on accurate, current data. I have been gradually tackling each item in MIDIOR’s 10-step data quality program and will address the fifth in this post. It has been a while, so here is a reminder of those details:

1. You know where your data comes from in terms of systems and sources
2. You are aware of conflicts and inconsistencies between your data sources
3. You have an approach for resolving any conflicts between data sources
4. You capture data once, and use it in multiple places
5. You have documented what data is critical for implementing your business rules, and you have approaches for filling in any missing data
6. You have tools and processes for identifying and correcting flaws in your data
7. Your data exists in a format that makes it easy to access using readily available tools
8. You are not dependent on a software vendor for access to your data
9. Everyone on your team is cognizant of the value of “good data” and the long-term costs of “sloppy data”
10. You leverage your data to support operations AND to support long term decisions

In the last two posts, I talked about the importance of establishing a “system of record” for each piece of data and a commitment to capturing data once and only once (even though it is likely to be used in multiple places). Following on from there, now that you know what data you have, how you get it and where it lives, you can easily figure out what you don’t have but do need. In other words, where are the data gaps that could mess with the accuracy of your systems?

In the context of funds and administrators, the data gaps are usually related to information needed to completely implement your business rules. These could be rules related to eligibility, contribution rates, benefit calculations, or maybe something as simple as who gets the monthly newsletter. If you don’t have any gaps, you (or your technical staff) will have a much easier time implementing the rules.

In order to determine whether you have any gaps, start by defining all of the data inputs required to calculate a benefit, issue a disbursement, report on an activity or whatever else you may need to do according to the plan rules. Some of the rules are described in a plan’s SPDs and some are operational rules that have evolved over time and become standard practice. In any case, we like to think of those business rules as a set of algorithms or equations, with defined inputs (data) and outputs (actions). If (and that’s a big if) you have clearly defined the algorithms to match your rules, then you can list all your required inputs, compare them to what you have available and define all of the gaps. Because systems are not people (who can often fill in the data gaps), you will need to figure out how to fill in all of the missing data and organize it in a way that lets you perform any calculation, and repeat it over and over, before you can consider your data set complete. The key point is to step through each business rule, ask yourself what piece of information is needed to complete that step, and write it all down. A simple example is sketched below.
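As a hedged illustration (the rule below is invented for this post, not taken from any SPD), writing a business rule as an algorithm makes its required inputs explicit, so every missing input surfaces as a named data gap:

```python
# A hypothetical benefit rule written as an algorithm with explicit inputs.
# The rule itself ($50 per credit, one credit per 1,000-hour year) is invented.
REQUIRED_INPUTS = ["dob", "hours_by_year"]

def monthly_pension(member):
    missing = [f for f in REQUIRED_INPUTS if member.get(f) is None]
    if missing:
        # Each missing input is a data gap that blocks the calculation
        return {"status": "data gap", "missing": missing}
    credits = sum(1 for hours in member["hours_by_year"].values() if hours >= 1000)
    return {"status": "ok", "monthly_benefit": 50.00 * credits}

print(monthly_pension({"dob": "1960-05-01", "hours_by_year": {"2018": 1500, "2019": 900}}))
# -> {'status': 'ok', 'monthly_benefit': 50.0}
print(monthly_pension({"dob": None, "hours_by_year": None}))
# -> {'status': 'data gap', 'missing': ['dob', 'hours_by_year']}
```

Stepping through each rule this way, and writing down every input it consumes, is exactly how you build the list of gaps to fill.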
By Susan Loconto Penta April 28, 2020
The impacts of the coronavirus are significant, regardless of location or vocation, company or community, age or gender. We find ourselves marveling at our clients as they do their best to keep delivering on their promises to customers, members and stakeholders, even as many are considered “non-essential.” At MIDIOR, the nature of our work and the location of our clients have necessitated virtual work for quite some time. We also find ourselves in the fortunate position of having made recent investments in the processes, platforms and training that enable a truly remote work environment. That said, we could not have imagined our current situation, where we are testing the limits of what we can do every day.

For our union and fund office clients, a shift to remote work can be particularly challenging – because some work cannot be done remotely (it is difficult to swing a hammer virtually) and because the norm for benefit services has centered on high-touch, personalized, in-person work. Most, if not all, of our clients have been pushed further along the “remote work” and “mobile access” journey, and now, more than ever, the value of quality member benefit systems, administration platforms, strong IT teams and mobile applications is visible. Today’s question is not “if” teams can work remotely and “how” members can access their benefit information at any time and from anywhere, but “when.” So, irrespective of where you are on your journey, now is a good time to sit down with your leadership teams and discuss your current situation and what a new normal will look like. The following is a quick list of questions to consider as you talk with your teams about making remote work and mobile access a reality. I hope it is helpful.

- Which jobs can be done while working remotely? For those that can’t, is it really impossible to do the work remotely or is it something else (e.g., people aren’t trained, guidelines are not in place, platforms do not exist, or even unconscious bias against remote work for particular jobs)?
- Are the basic technology tools in place to make this work? This includes remote access via VPN, remote or virtual desktops, internal instant messaging platforms like Slack, and the ability to conduct video meetings (e.g., Microsoft Teams, Skype, Zoom, Lifesize). How much security is required and how much training do your teams need?
- Do you have guidelines and clear expectations about what it means to work remotely? Time can get blurry when you are at home in terms of when you are at work and when you are not. Define ways for employees to “check in” and “check out” along with a new roster of team meetings.
- Can you service members remotely? There is no (technical) reason your phone and email systems can’t work 100% as well when your staff is distributed. Turning things off is not the answer. If anything, this is a time when members need more service and immediate answers. This may require restructuring workflows in the short term, but doing this now will give you a leg up in the future.
- Do you have a member portal? Is it a real mobile app with sufficient data to answer members’ basic questions? If not, think about how to enable smartphone access to your benefit systems quickly and put a plan in place for a permanent solution.

It appears that we all need to visualize a future where remote work, at least for some, and self-service everything will be the norm. We can’t turn the clock back, but we can set ourselves up for a better future.
Take stock of what is hard now, build an approach for moving through those obstacles, and make sure your technology roadmap accounts for a future that includes remote work and member self-service. Making it work is not trivial, but it is not impossible either. I hope this is helpful, and we always want to hear from you if you have ideas on how we can adapt our services to be helpful in these complicated times. And lastly, for anyone reading this who has someone in their circle on the front lines, please say an extra thanks from us. And for everyone else holding it together in the background, no matter how, remember we all have a role to play in our community’s recovery.
By Michael Goldberger February 26, 2020
Welcome to 2020! After a bit of a hiatus for the holidays, I am picking up this blog series on data quality for unions and fund offices. I started the series by talking about the importance of “getting the data right” in your benefits administration system, including a grading rubric to assess data excellence. Since then, I have unpacked the first three elements of our 10-step, comprehensive data quality program (listed again below) and will tackle the fourth in this post:

1. You know where your data comes from in terms of systems and sources
2. You are aware of conflicts and inconsistencies between your data sources
3. You have an approach for resolving any conflicts between data sources
4. You capture data once, and use it in multiple places
5. You have documented what data is critical for implementing your business rules, and you have approaches for filling in any missing data
6. You have tools and processes for identifying and correcting flaws in your data
7. Your data exists in a format that makes it easy to access using readily available tools
8. You are not dependent on a software vendor for access to your data
9. Everyone on your team is cognizant of the value of “good data” and the long-term costs of “sloppy data”
10. You leverage your data to support operations AND to support long term decisions

In my last post, I talked about identifying systems of record for each data element and creating some rules on how to use them in order to resolve data conflicts. Once you can identify your master data sources, you need to be disciplined about capturing the data that goes into them (and any subsequent changes) just once, even though it is probably used in multiple places. To accomplish this, you must link the many places a particular data element is used (e.g., reports) back to the single trusted, master source for that data.

Maybe this seems obvious, but there is a catch. Many fund offices and unions have systems that were built on top of other systems, and business processes that are disconnected from each other, so even where intentions are right, there are often copies of the same information residing in multiple places. For example, think about creating a list of contributing employers. Let’s say one of the employers on that list had a name change. How many places beyond your system of record might need to be updated in order to be sure you always use the new name (e.g., on invoices or reports)? If there is more than one, this post is for you.

To avoid this problem, you want to “normalize” your data. In a fully normalized system, any piece of data that is used in multiple places is stored independently, with unique identifiers. Let’s say the employer “Bill’s Sprockets” changes its name to “Bill and Daughter’s Sprockets.” In this case, you want to be sure that your “system of record” reflects the new name and that anywhere the employer name is used references that source. This ensures you don’t (1) continue using the old name by accident, (2) lose the connection between information associated with the old name and the new name, or (3) end up with confusion about how many companies really exist. This will sound like a technical detail, but there is a very important key to having a normalized data set from which you can create such a list: you need a unique identifier (ID) for each employer that never changes. Why is this so important?
Because once you establish the Employer ID, any other tool or report that needs information about an employer can reference the Employer ID, rather than something else that might change over time (like the employer’s name). That unique identifier might be based on something real (like a Tax ID number), or it might be created manually by you or generated by one of your systems. The important points in this case, which also apply to any situation where normalized data is critical, are that:

- Every employer has a unique ID
- Every employer has only one ID
- Once it has been assigned, the Employer ID never changes
- Every ID is used only once, and for only one employer
- You have at least one piece of information for each employer (besides the ID)

For example, a basic mail-merge list, made from data that is not normalized, might look like the sketch below (the rows are invented for illustration):
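```python
# Invented rows for illustration: the same employer typed three ways, so any
# grouping or mail-merge treats them as three different companies.
mail_merge_rows = [
    {"employer_name": "Bill's Sprockets",              "city": "Quincy"},
    {"employer_name": "Bills Sprockets",               "city": "Quincy"},
    {"employer_name": "Bill and Daughter's Sprockets", "city": "Quincy"},
]

# The normalized alternative: one employer record with a permanent ID,
# and every other table refers to the ID, never to the name.
employers = {101: {"name": "Bill and Daughter's Sprockets"}}  # system of record
invoices = [
    {"employer_id": 101, "month": "2020-01", "amount": 12500.00},
    {"employer_id": 101, "month": "2020-02", "amount": 11875.00},
]

# A name change now touches exactly one row; every invoice, report and
# mail-merge keyed on employer_id stays linked to the same company.
employers[101]["name"] = "Bill and Daughter's Sprockets LLC"
```

With the non-normalized rows, a name change would have to be hunted down in every copy, and any row you miss becomes a phantom “third company.”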
By Michael Goldberger November 14, 2019
This is our 4th post in a series on data quality for unions and fund offices. During the summer, we wrote about getting the data right in your benefits administration system, including a grading rubric to help you assess your data excellence. Since then, I have unpacked the first two elements of our 10-step data quality program in subsequent posts. Today, I will tackle the third, which is focused on resolving data conflicts. In case you don’t have the 10 steps handy, they are:

1. You know where your data comes from in terms of systems and sources
2. You are aware of conflicts and inconsistencies between your sources
3. You have an approach for resolving any conflicts between sources
4. You capture data once, and use it in multiple places
5. You have documented what data is critical for implementing your business rules, and you have approaches for filling in any missing data
6. You have tools and processes for identifying and correcting flaws in your data
7. Your data exists in a format that makes it easy to access using readily available tools
8. You are not dependent on a software vendor for access to your data
9. Everyone on your team is cognizant of the value of “good data” and the long-term costs of “sloppy data”
10. You leverage your data to support operations AND to support long term decisions

At this point, you know what your data is, where it comes from and where the problems are likely to be hiding. The next step is to tackle what to do about those data problems. There are several different approaches to this topic (and if you want, you can spend a few years studying TQM or Six Sigma to get deep into the theory of quality control processes). In the manufacturing world, the goal is to keep defects out of your system by finding and removing them as early in the process as possible. The same concept applies to data – even in the fund office and union environment. It is much cheaper to keep the bad data out, or make corrections, at the point where data goes into the system than it is to find and correct issues somewhere down the line. Sometimes this might feel like an unnecessary burden – checking everything two or three times as it goes into the system, or requiring a complete record (e.g., date of birth) before adding a new member. But it’s much easier to correct a member’s name when they start working (because you noticed a conflict) than it is to deal with name challenges as you process a death benefit.

Start by going back through your data inventory and confirming your decision about the “system of record” for each element. In our example in the 2nd post, a member’s Union ID is assigned by the Membership system, so that would be the “system of record” for that data element. Do this for each data element. In our example:

- Member Name – Enrollment form
- Member SSN – Employer
- Member Union ID – Membership system
- Hours Worked by Date – Employer
- Member Dues Status – Membership system
- Member DoB – Enrollment form
- Member Marital Status – Enrollment form

Now that you have reconfirmed the “system of record” for each piece of data, you can implement a business rule that tells you what to do in the case of a conflict. Add the rule to each conflict on your list. For example:

- If the name from the employer is different than the name from the enrollment form, use the name from the enrollment form
- If the Union ID from the employer is different than the Union ID from the membership system, use the ID from the membership system

The more of these rules you can establish up front, the easier it will be to maintain clean data down the line.
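Here is a minimal sketch of those rules expressed in code; the field names and source labels are illustrative, and a real implementation would live wherever your reconciliation process runs:

```python
# Encode each "system of record" decision as data, then apply it mechanically
# whenever two sources disagree. Field and source names are illustrative.
SYSTEM_OF_RECORD = {
    "member_name":    "enrollment_form",
    "member_ssn":     "employer",
    "union_id":       "membership_system",
    "dob":            "enrollment_form",
    "marital_status": "enrollment_form",
}

def resolve(field, values_by_source):
    """Return the value from the field's system of record, or flag the conflict."""
    trusted = SYSTEM_OF_RECORD.get(field)
    if trusted in values_by_source:
        return values_by_source[trusted]
    # No trusted value on file: surface the problem instead of guessing
    raise ValueError("no system-of-record value for " + field)

# The employer file and the enrollment form disagree on a member's name
print(resolve("member_name", {"employer": "Bob Smith", "enrollment_form": "Robert Smith"}))
# -> Robert Smith
```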
It is important to note that the payoff for all of your quality control efforts may not be visible, because you are solving future problems before they happen. If you’re not sure about the cost-benefit of putting “high fences” around your data, a little bit of background reading on the value of data quality should convince you: the consensus from the manufacturing world is that high-quality processes are almost always cheaper in the long run than processes optimized only for low cost. In my next post, I will discuss the importance of capturing data once and using it in many places. Until then, best of luck resolving your data conflicts.
By Michael Goldberger October 30, 2019
In our last post, we unpacked the first element of our 10-step, comprehensive data quality program. In this post, we will tackle the second step. As a reminder:

1. You know where your data comes from in terms of systems and sources
2. You are aware of conflicts and inconsistencies between your sources
3. You have an approach for resolving any conflicts between sources
4. You capture data once, and use it in multiple places
5. You have documented what data is critical for implementing your business rules, and you have approaches for filling in any missing data
6. You have tools and processes for identifying and correcting flaws in your data
7. Your data exists in a format that makes it easy to access using readily available tools
8. You are not dependent on a software vendor for access to your data
9. Everyone on your team is cognizant of the value of “good data” and the long-term costs of “sloppy data”
10. You leverage your data to support operations AND to support long term decisions

If you completed the data inventory described in our last data post, it is likely that you have already found some potential conflicts. The more you know about the potential conflicts up front, the more you can do to maintain the integrity of your internal data. Completing step #1 is a prerequisite here, but once you have an inventory of your data elements and sources, you can use the list to identify potential conflicts. For example, if you add new members based on a list from employers, but get their Union ID based on a list from a union office, do you compare and reconcile any differences in the member’s name? Do you verify that the “new” Union number doesn’t already exist in your system?

Some of the most nefarious problems are created by the “duplicate person” syndrome, when two separate records refer to the same person. Even worse is the “duplicate ID” syndrome, when one record is tied to two different people. These are the data gremlins that can cost you dearly down the road; the expense to resolve any downstream issues compounds the longer they linger.

Let’s walk through a specific example of the process for identifying conflicts. Start by picking a data element such as “Member Name.” This is a piece of data that you could receive from multiple places, and it should already be on your list of potential conflicts. Create a list of the data sources for the data element you selected, along with all of the pieces of related data that come from each source. Use a spreadsheet if possible; a document will also work, but a spreadsheet will be more helpful later. For example:

Data Source 1 – Employer
- Member Name
- Member SSN
- Member Union ID
- Hours Worked by Date

Data Source 2 – Union Membership system
- Member Name
- Member SSN
- Member Union ID
- Member Dues Status
- Member DoB

Data Source 3 – Member Enrollment Form
- Member Name
- Member SSN
- Member Union ID
- Member DoB
- Member Marital Status

From this example, you may learn that:

- Member Name, Member SSN and Member Union ID have up to three sources (and there are no guarantees that all three fields match)
- There are two “identifiers” (SSN and Union ID), which should both be unique and consistent
- Member DoB may have two sources that are inconsistent

Do this for each data source and element until you have identified all of the potential conflicts that could result from a data element that has more than one source. Make a list of those conflicts in a separate worksheet in the Excel workbook that contains your data inventory.
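As a minimal sketch of hunting for those two gremlins, assuming (hypothetically) that you can combine your sources into one CSV with an SSN and a Union ID per row:

```python
# Flag the two "gremlins": one person under multiple IDs, and one ID shared
# by multiple people. The file and column names are illustrative.
import pandas as pd

members = pd.read_csv("member_inventory.csv")  # columns include: ssn, union_id, name

# "Duplicate person": one SSN appearing under more than one Union ID
ids_per_ssn = members.groupby("ssn")["union_id"].nunique()
print(members[members["ssn"].isin(ids_per_ssn[ids_per_ssn > 1].index)])

# "Duplicate ID": one Union ID tied to more than one SSN
ssns_per_id = members.groupby("union_id")["ssn"].nunique()
print(members[members["union_id"].isin(ssns_per_id[ssns_per_id > 1].index)])
```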
We'll discuss resolving conflicts in our next post, but remember that capturing the sources of potential conflicts will make the resolution step much simpler.
By Michael Goldberger October 3, 2019
Earlier this summer, we wrote about “getting the data right” in your benefits administration system, including a grading rubric to help you assess your data excellence. There are several concepts to unpack, and I will start doing that with this post. In our earlier post, we emphasized that data is a root cause of many fund office challenges (from business operational complexity to the expense of technology deployments to high-quality member services). Getting your data right is the path to unlocking many internal logjams and obstacles. In order to help fund administrators and unions achieve data excellence, MIDIOR has developed a comprehensive, 10-step data quality program that correlates to the 10 elements in our data grading rubric. We will tackle each element in turn in the coming weeks, starting with the first item below:

1. You know where your data comes from in terms of systems and sources
2. You are aware of conflicts and inconsistencies between your sources
3. You have an approach for resolving any conflicts between sources
4. You capture data once, and use it in multiple places
5. You have documented what data is critical for implementing your business rules, and you have approaches for filling in any missing data
6. You have tools and processes for identifying and correcting flaws in your data
7. Your data exists in a format that makes it easy to access using readily available tools
8. You are not dependent on a software vendor for access to your data
9. Everyone on your team is cognizant of the value of “good data” and the long-term costs of “sloppy data”
10. You leverage your data to support operations AND to support long term decisions

Maybe this seems obvious, but the first place to start is to be sure you have an accurate list of systems and sources for your data. A seemingly simple exercise is to create your own data inventory (aka a list), including the “provenance” of every piece of data. To get you started, figure out where the data about your members lives (including the many details about each member); do the same thing for your dependent and beneficiary data, your list of employers (maybe with multiple contacts per employer), and your work history or contributions data. This is a good start. When you start digging into the details – by looking at each of these “lists” and trying to add data about your data – you will uncover some primary sources of data challenges. Even if you can’t “fix” your data sources, it’s always better to be aware of them.

Start by creating a spreadsheet of all of your data elements if possible; a document will also work, but a spreadsheet will be more helpful later. Then log the potential systems (sources) for each. You can start by asking yourself multiple questions about each data element. An easy place to start is to consider the typical data associated with your basic list of members. How do new members get added? Are they added manually by fund office staff? If so, where do they get their information from? Does it come from the members via a paper form? Do you receive new member lists from employers as well? What about from local union offices? Digging further into the details: do you get members’ names along with their addresses from the same source? Do you require an SSN or another identifier, like a Union ID, for every member? Do those come from different places? For every single unique piece of data, ask these questions and log ALL of the potential sources along with the primary “system of record” for that data if you know it.
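Here is a minimal sketch of what such an inventory can look like, with invented entries for illustration (your elements, sources and systems of record will differ):

```python
# A data inventory as a simple CSV: one row per data element, listing every
# potential source and the primary system of record. Entries are invented.
import csv

inventory = [
    {"element": "Member Name", "sources": "employer; membership system; enrollment form",
     "system_of_record": "enrollment form"},
    {"element": "Member SSN", "sources": "employer; membership system; enrollment form",
     "system_of_record": "employer"},
    {"element": "Union ID", "sources": "employer; membership system",
     "system_of_record": "membership system"},
    {"element": "Hours Worked by Date", "sources": "employer",
     "system_of_record": "employer"},
]

with open("data_inventory.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["element", "sources", "system_of_record"])
    writer.writeheader()
    writer.writerows(inventory)
```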
Give yourself 100 points if you already have such an inventory. Stay tuned for MIDIOR's next post on identifying Data Conflicts and Inconsistencies.
By Michael Goldberger July 10, 2019
At MIDIOR, we sound like a broken record when it comes to data for our union and fund office clients. Whether you are focused on organizing, member services or anything in between, you need accurate systems, and accurate systems are, of course, “all about the data.” If you get the data right, everything else is easier. How do you measure how well you’re doing? On a scale of 1 to 10, are you an 8 or a 2? We recommend a grading rubric based on multiple factors that are indicative of quality, integrity, structure and completeness. For example, we assess whether:

1. You know where your data comes from in terms of systems and sources
2. You are aware of conflicts and inconsistencies between your sources
3. You have an approach for resolving any conflicts between sources
4. You capture data once, and use it in multiple places
5. You have documented what data is critical for implementing your business rules, and you have approaches for filling in any missing data
6. You have tools and processes for identifying and correcting flaws in your data
7. Your data exists in a format that makes it easy to access using readily available tools
8. You are not dependent on a software vendor for access to your data
9. Everyone on your team is cognizant of the value of “good data” and the long-term costs of “sloppy data”
10. You leverage your data to support operations AND to support long term decisions

As a side note, a few things that are NOT part of this grading rubric:

- That you’re using any particular brand or type of database or database product
- That you leverage “the cloud,” “big data,” “deep learning” or any other buzzwords of the day
- That you know anything about referential integrity, primary keys, or row locking and commit logic

Strong member recruitment, high member retention and great member service in the future depend on your ability to keep your data accurate and current today. If you are not sure where you stand, start by asking yourself about the items listed above. When you are through, you will have a better sense of where your gaps are.