
MIT Touchstone Project Planning

 

Goals:

Transition the IdPs to the Shibboleth 2.1.4 release.

Phase One: Transition core MIT IdPs (idp.mit.edu)

 

...


Hardware

...

    a. idp1 and idp2 are running on RHEL3 physical machines.

...

    b. NIST has also provided idp2-dev, which is also a RHEL3 machine.

...

    c. Bob has been using foonalagoona, which is provided by OPS/AMIT. This is not a RHEL3 machine.

...

    d. To complete the transition, new RHEL5 VMs will be requested from NIST:

...

      i. 1 dev machine

...

      ii. 2 staging machines

...

      iii. 2 production machines

...

      iv. Configuration:

      ...

            1. minimum 2 GB RAM; 4 GB suggested

      ...

            2. at least 10 GB disk, 7200 RPM

      ...

            3. The SWITCH recommendation for CPUs on a physical machine is 4 cores, each running at 2 GHz. It has been noted that IdPs tend to be CPU bound, not disk I/O or network bandwidth intensive.

      e. Once the transition to the new IdPs has been completed, the following physical machines will no longer be needed by Touchstone:

      ...

            iii. foonalagoona.mit.edu (OPS/AMIT)

      2. Develop login page(s) that support multiple mechanisms, without using Stanford WebAuth.

      a. Authentication mechanisms:

      ...

      a. There are currently two entityIDs, or providerIDs, that are used to describe the core MIT IdPs. Within InCommon our entityID is urn:mace:incommon:mit.edu. Within the campus federation our entityID is https://idp.mit.edu/shibboleth. A single entityID can be used in two different federations. However, when doing so it is important to keep the data identical in the two different metadata files.


      https://spaces.internet2.edu/display/InCCollaborate/IdP+entityID+Shift+to+URLs+--+FAQ indicates that new IdPs should use a URL-style entityID. However, it also suggests that existing URN-style entityIDs should not be migrated. It points out, “Changing an entityID may cause service disruption and require changes at many partner SP sites. It is usually more important for entityIDs to remain stable.”

      We should strongly consider ignoring this recommendation and migrating to the URL form of entityID within the InCommon metadata. 
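      For reference, an entityID is carried as the entityID attribute of the IdP's EntityDescriptor element in each federation's metadata aggregate. A minimal sketch of what the migrated entry might look like (child elements abbreviated; the URL-form value shown is the proposed one, matching the campus federation's existing entityID):

```xml
<!-- Hypothetical sketch: the IdP's entry after migrating InCommon to the
     URL-form entityID. If adopted, both federations' metadata files would
     carry this same entityID, with identical data beneath it. -->
<EntityDescriptor xmlns="urn:oasis:names:tc:SAML:2.0:metadata"
                  entityID="https://idp.mit.edu/shibboleth">
  <IDPSSODescriptor protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
    <!-- KeyDescriptor, SingleSignOnService endpoints, etc. omitted -->
  </IDPSSODescriptor>
</EntityDescriptor>
```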


      b. We’re thinking about adding the new IdPs to the existing idp.mit.edu DNS round robin. It should be understood that there will be no state sharing between the 1.3 IdPs and the 2.x IdPs. To a certain extent this will affect SSO. For users that have configured their browsers to always use a certificate, or always use Kerberos, there should be no visible change in behavior while the 1.3 and 2.x servers are both running.


      Users that don’t have a mechanism automatically selected, but always click on certificates or Kerberos, may be presented with the login screen twice during a browser session.

       


      The same is true for people that use username and password. They should only end up being prompted for their username and password one extra time during a typical browser session. Many users will not see a change in behavior. 


      Once we have confidence in the new IdPs, the 1.3 IdPs will be taken out of the DNS pool and taken out of service. How long the 1.3 and 2.x IdPs should be allowed to run concurrently is open to debate: perhaps only an hour, perhaps a couple of days. We expect to have a better idea once we have done this in the staging environment first.


      Should the DNS TTL be lowered during the time that both the 1.3 and 2.x IdPs are running concurrently?
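      If it is lowered, the round robin could look something like this BIND-style zone fragment (a sketch only: the 300-second TTL and the addresses are placeholders, not real MIT zone data):

```
; Hypothetical zone fragment for the transition window.
; A TTL of ~5 minutes lets a misbehaving server be pulled
; from the pool quickly; addresses are placeholders.
idp   300   IN   A   10.0.0.1   ; 1.3 IdP (idp1)
idp   300   IN   A   10.0.0.2   ; 1.3 IdP (idp2)
idp   300   IN   A   10.0.0.3   ; 2.x IdP (new VM)
idp   300   IN   A   10.0.0.4   ; 2.x IdP (new VM)
```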


      Alternatives:

      1. Bring up the new IdPs under a new DNS name, and add them to the MIT-metadata. As SPs take the new metadata, they will start using the new IdPs.


      Note that this technique is not recommended by the Shib-users community or the Internet2 wiki. It can lead to a long transition time, and it is difficult to backtrack quickly if there are any problems. The best-behaved SPs tend to update their metadata only once a day; many update only manually.


      2. We could “throw the switch” and shut down the old IdPs and bring up the new IdPs during one scheduled short downtime. This would require a short interruption of service. If there are problems with the 2.x IdPs, there will be further interruptions of service.

      ...

      Deploy to CAMS week of February 22