
Tuesday, March 20, 2012

3 server active cluster

Does anyone have any thoughts on how to set up and configure a SQL cluster of
three ACTIVE servers? I'm familiar with 2 node active/passive and
active/active clusters but have not seen much published on a 3 or 4 node
active(n) cluster. Thanks! - Mike
Hi
No different from a 2 node cluster. The big decision you have to make is onto
which machine an instance can fail over. If you were to allow enough resources
for one machine to handle all 3 servers' instances, you have a lot of unused
resources.
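Mike's sizing tradeoff can be put into numbers. A toy Python sketch (the failover policies, loads, and units are invented for illustration, not any real cluster API):

```python
# Illustrative capacity math for an N-node all-active cluster (made-up numbers).
# If every node must be able to host every instance at once, each node needs
# the sum of all instance workloads; if an instance may only fail over to one
# designated neighbor, each node needs only its own load plus its neighbor's.

def capacity_needed(instance_loads, policy):
    """Return per-node capacity required, in the same units as the loads.

    policy "any"  - any node may end up hosting all instances (worst case)
    policy "next" - instance i may only fail over to node (i+1) % N
    """
    n = len(instance_loads)
    if policy == "any":
        return [sum(instance_loads)] * n
    if policy == "next":
        # node j hosts its own instance plus, at worst, instance j-1
        return [instance_loads[j] + instance_loads[(j - 1) % n] for j in range(n)]
    raise ValueError("unknown policy")

loads = [16, 16, 16]  # e.g. GB of RAM per instance; hypothetical figures
print(capacity_needed(loads, "any"))   # [48, 48, 48] - lots of idle headroom
print(capacity_needed(loads, "next"))  # [32, 32, 32]
```

The "any" policy is exactly the unused-resources situation Mike describes: every node is sized for the whole cluster's load, most of which sits idle.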
Regards
Mike Epprecht, Microsoft SQL Server MVP
Zurich, Switzerland
MVP Program: http://www.microsoft.com/mvp
Blog: http://www.msmvps.com/epprecht/
"Mike" wrote:

> Does anyone have any thoughts on how to setup and configure a SQL cluster of
> three ACTIVE servers. I'm familiar with a 2 node, active/passive and
> active/active cluster but have not seen much published on a 3 or 4 node
> active(n) cluster. Thanks! - Mike
|||I have another 3-server cluster. I want active/passive, plus a dev box as a
last resort for failover. The dev box is going to run either VMware or Virtual
Server, because I am using it as a last resort for failover for Exchange, too.
You can call me crazy, but will this work?
Kami

Quote:

Originally posted by Mike Epprecht (SQL MVP)
Hi
No different to a 2 node. The big descision you have to make is onto which
machine an instance can fail over. If you were to allow enough resouces for
one machine to handle all 3 server's instances, you have a lot of unused
resources.
Regards
Mike Epprecht, Microsoft SQL Server MVP
Zurich, Switzerland
MVP Program: http://www.microsoft.com/mvp
Blog: http://www.msmvps.com/epprecht/
"Mike" wrote:

> Does anyone have any thoughts on how to setup and configure a SQL cluster of
> three ACTIVE servers. I'm familiar with a 2 node, active/passive and
> active/active cluster but have not seen much published on a 3 or 4 node
> active(n) cluster. Thanks! - Mike

3 Node W2K3 SQL 2005 Cluster

All,
I've set up a Windows 2003 Enterprise Edition cluster of 3 nodes.
Everything is attached to a SAN (CX500).
Just installed SQL 2005 and all works fine.
Is it possible to have this cluster with 2 active nodes and 1 passive
node?
I've set up a drive (R:\) and it's online, but it is only available from 1
resource. The other nodes see it but can't access it. I want to set up
a cluster where 2 nodes can access the drive (to share SQL databases)
and 1 node accesses it on failover.
Is this a good setup or not? Should I use 2 or 4 nodes instead of 3?
Or is it possible to add SQL cluster resources to the same storage?
Also, is it possible to set up a load-balanced SQL 2005 cluster, where I
can have 3 active nodes load balanced and also fail over when 1 node
goes down?
Hope someone can help me out.
Thanks in advance.

> Is it possible to have this cluster with 2 active nodes and 1 passive
> node?
Yes, but not in the way you want. You can create another SQL instance and
run it on the other node, but that instance still has its own databases and
its own disks.

> i've setup a drive (r:\) and it's online but only avaiable from 1
> resource. the other nodes sees it but can't access it. I want to setup
> a cluster where 2 nodes can access the drive (to share sql databases)
> and 1 node access it when it failovers..
No, at any given time only ONE node can access a disk (the node where the
cluster resource group is online). You may not access an NTFS volume from
more than one server at the same time; this is an NTFS limitation. This is
why this style of clustering is called the "shared nothing" cluster model.
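The "shared nothing" rule described above can be modeled in a few lines. A hypothetical sketch, not any real cluster API: a clustered disk has exactly one owner node at a time, and failover simply transfers ownership.

```python
# Toy model of shared-nothing disk ownership. Class and node names are
# illustrative only; the real arbitration is done by the cluster service.

class ClusteredDisk:
    def __init__(self, name, owner):
        self.name = name
        self.owner = owner  # exactly one node owns the disk at any moment

    def read(self, node):
        # only the current owner node may touch the volume
        if node != self.owner:
            raise PermissionError(f"{node} does not own {self.name}")
        return f"data on {self.name}"

    def fail_over(self, new_owner):
        # ownership transfers atomically; the old owner loses access
        self.owner = new_owner

r_drive = ClusteredDisk("R:", owner="NODE1")
print(r_drive.read("NODE1"))  # works: NODE1 owns the disk
r_drive.fail_over("NODE3")
print(r_drive.owner)          # NODE3; NODE1 can no longer read the disk
```

This is why the poster's goal of two nodes sharing R:\ simultaneously can't work on a standard NTFS cluster disk: `read()` from a non-owner always fails.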

> Also is it possible to setup a load balanced sql 2005 cluster? where i
> can have 3 active nodes load balanced and also failover when 1 node
> goes down?
Same as above: you can have multiple instances running, but each of these
instances has its own disks and databases.

> Hope someone can help me out..
> Thanx in advance.
>
Rgds,
Edwin.

3 node sql2000 on win2003 cookbook?

Can someone point me to a doc (cookbook) for building a 3 node SQL 2000
cluster on Windows 2003?
thanks,
JR
You can take a look at the following link:
http://support.microsoft.com/?id=260758. It deals with FAQs and also has
links on how to install SQL 2000 on a Windows 2003 cluster. Your experience
should not vary with the number of nodes in the cluster.
Sandeep Sutari
Microsoft Corp.
This posting is provided "AS IS" with no warranties, and confers no rights.
Use of any included script samples are subject to the terms specified at
http://www.microsoft.com/info/cpyright.htm
"jr" <jr@.jr.com> wrote in message
news:%23iyGdVQOEHA.904@.TK2MSFTNGP12.phx.gbl...
> Can some point to a doc (cookbook) for building a 3 node sql2000 cluster on
> win 2003.
> thanks,
> JR
>
|||Sandeep, can you point to similar information on active/active clustering,
not failover? I seem to find a lot about failover but am having a hard time
locating a FAQ or a how-to on configuring multiple active nodes to share the
processing load.
Thank you
-- Sandeep Sutari [MSFT] wrote: --
You can take a look at the following link
http://support.microsoft.com/?id=260758. It deals with FAQ and also has
links on how to install SQL2000 on Win2003 Cluster. Your experience should
not vary on the number of nodes on the cluster.
Sandeep Sutari
Microsoft Corp.
This posting is provided "AS IS" with no warranties, and confers no rights.
Use of any included script samples are subject to the terms specified at
http://www.microsoft.com/info/cpyright.htm
"jr" <jr@.jr.com> wrote in message
news:%23iyGdVQOEHA.904@.TK2MSFTNGP12.phx.gbl...
> Can some point to a doc (cookbook) for building a 3 node sql2000 cluster on
> win 2003.
> JR
|||I don't think active/active means what you think it does here. :-) That's a
term that was used in SQL Server 7.0 to indicate that both nodes in your
Failover Cluster were running an installation of SQL Server that was being
accessed by users. Those instances do not share databases between them; they
were completely stand-alone. SQL Server 2000 does not allow multiple nodes
to access the same database at the same time either.
Sincerely,
Stephen Dybing
This posting is provided "AS IS" with no warranties, and confers no rights.
"Michael" <anonymous@.discussions.microsoft.com> wrote in message
news:538CE54C-0C3B-48B9-AD24-3E23CF18B54D@.microsoft.com...
> Sandeep, can you point to similar information on the Active / active
> clustering, not failover? i seems to find a lot about fail over but having
> hard time locating faq or a how to on configuring it as multiple active
> nodes to share the processing load.
> Thank you
> -- Sandeep Sutari [MSFT] wrote: --
> You can take a look at the following link
> http://support.microsoft.com/?id=260758. It deals with FAQ and also has
> links on how to install SQL2000 on Win2003 Cluster. Your experience should
> not vary on the number of nodes on the cluster.
>
> --
> Sandeep Sutari
> Microsoft Corp.
> This posting is provided "AS IS" with no warranties, and confers no rights.
> Use of any included script samples are subject to the terms specified at
> http://www.microsoft.com/info/cpyright.htm
> "jr" <jr@.jr.com> wrote in message
> news:%23iyGdVQOEHA.904@.TK2MSFTNGP12.phx.gbl...

3 node SQL Server cluster - is it possible

Friends,
I will have three SQL Server 2000 boxes at my location. Is it possible
to set up a SQL Server cluster with 3 nodes?
I would like the SQL Servers running on their own boxes but failing over to
other member nodes if a box fails.
Thanks
Your best bet would be a four node cluster with three SQL instances. That
way there is a "clean" node ready to assume any single failed instance. If
you don't have a free node, you run into some compromises on memory
allocation so you can "stack" instances.
Geoff N. Hiten
Senior Database Administrator
Microsoft SQL Server MVP
"D Goyal" <goyald@.gmail.com> wrote in message
news:1134658318.825088.97770@.z14g2000cwz.googlegroups.com...
> Friends
> I will have three SQL Server 2000 boxes at my location. Is it possible
> for a SQL Server cluster with 3 nodes?
> I will like SQL servers running on their own boxes but to failover to
> other member nodes if a box fails..
> Thanks
>


3 node Cluster Question

Is it possible to have a node that is running one default and one named
instance of SQL Server 2000, a second node that's running a default instance
of SQL Server 2000, and have both fail over to a third node? Any thoughts,
suggestions, or answers would be appreciated, particularly if you're currently
running this configuration.
You only get one default instance per cluster. You can use DNS to alias an
old server name to the new server name. Three node clusters are legal using
Windows Server 2003 Enterprise Edition and SQL Server 2000 Enterprise
Edition. You can control the failover path for each instance independently.
Geoff N. Hiten
Microsoft SQL Server MVP
Senior Database Administrator
Careerbuilder.com
I support the Professional Association for SQL Server
www.sqlpass.org
"Roger Newton" <RogerNewton@.discussions.microsoft.com> wrote in message
news:257A442B-6027-4C21-A29E-A51C05C65E8D@.microsoft.com...
> Is it possible to have a node that is running one default and one named
> instance of SQL Server 2000, A second node that's running a default instance
> of sql server 2000 and have both failover to a third node? Any thoughts,
> suggestions, or answers would eb appreciated particular if your're currently
> running this conifiguration.
|||Thanks for responding.
So it sounds like if I'm running Windows 2003 and SQL Server 2000 Enterprise,
with 1 default instance and 1 named instance on node A and 1 named
instance on node B, I will be able to fail all instances over to a third node
(node C) for failover purposes. Sound right to you?
"Geoff N. Hiten" wrote:

> You only get one default instance per cluster. You can use DNS to alias an
> old server name to the new server name. Three node clusters are legal using
> Windows Server 2003 Enterprise Edition and SQL Server 2000 Enterprise
> Edition. You can control the failover path for each instance independantly.
> --
> Geoff N. Hiten
> Microsoft SQL Server MVP
> Senior Database Administrator
> Careerbuilder.com
> I support the Professional Association for SQL Server
> www.sqlpass.org
> "Roger Newton" <RogerNewton@.discussions.microsoft.com> wrote in message
> news:257A442B-6027-4C21-A29E-A51C05C65E8D@.microsoft.com...
>
>
|||You could look at it that way. Instances (default or named) don't really
have an owner node in SQL 2000. All nodes are equal, except during installs
or upgrades. You can set a preferred host order and limit which hosts are
allowed to run which instances, but you have to do that yourself after
setup. I personally run an N+1 cluster where I have three instances and four
nodes. My only gripe is I have to manually reset the preferred node order
if anybody fails over, so I don't overcommit memory on a single node.
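The overcommit gripe above can be checked mechanically. A small illustrative sketch (instance names, node names, and memory figures are all invented):

```python
# Sketch of the overcommit problem: after a failover, two instances can land
# on one node and together exceed its memory. Purely illustrative numbers.

def overcommitted(placement, instance_mem, node_mem):
    """placement maps instance -> hosting node; return the nodes whose
    hosted instances together exceed that node's memory."""
    used = {}
    for inst, node in placement.items():
        used[node] = used.get(node, 0) + instance_mem[inst]
    return sorted(n for n, m in used.items() if m > node_mem[n])

instance_mem = {"SQLA": 24, "SQLB": 24, "SQLC": 24}          # GB, hypothetical
node_mem = {"N1": 32, "N2": 32, "N3": 32, "N4": 32}

normal = {"SQLA": "N1", "SQLB": "N2", "SQLC": "N3"}          # N4 is the spare
after_two_failures = {"SQLA": "N4", "SQLB": "N4", "SQLC": "N3"}

print(overcommitted(normal, instance_mem, node_mem))             # []
print(overcommitted(after_two_failures, instance_mem, node_mem))  # ['N4']
```

A check like this is essentially what resetting the preferred node order after each failover prevents: it keeps two 24 GB instances from piling onto one 32 GB node.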
Geoff N. Hiten
Microsoft SQL Server MVP
Senior Database Administrator
Careerbuilder.com
I support the Professional Association for SQL Server
www.sqlpass.org
"Roger Newton" <RogerNewton@.discussions.microsoft.com> wrote in message
news:C43C55D9-65E7-40FE-A4A5-4A3DC76EE9E3@.microsoft.com...
> Thanks for responding,
> So it sounds like that if I'm running windows 2003, SQL server 2000
> enterprise have 1 default instance and 1 named instance on node A and 1 named
> instance on node B I will be able to failover all instances to a third node
> (node C) for failover purposes. Sound right to you?
> "Geoff N. Hiten" wrote:
|||thanks for the help
"Geoff N. Hiten" wrote:

> You could look at it that way. Instances (default or named) don't really
> have an owner node in SQL 2000. All nodes are equal, except during installs
> or upgrades. You can set a preferred host order and limit which hosts are
> allowed to run which instances, but you have to do that yourself after
> setup. I personally run a N+1 cluster where I have three instances and four
> nodes. My only gripe is I have to manually reset the preferred node order
> if anybody fails over so I don't overcommit memory on a single node.
> --
> Geoff N. Hiten
> Microsoft SQL Server MVP
> Senior Database Administrator
> Careerbuilder.com
> I support the Professional Association for SQL Server
> www.sqlpass.org
> "Roger Newton" <RogerNewton@.discussions.microsoft.com> wrote in message
> news:C43C55D9-65E7-40FE-A4A5-4A3DC76EE9E3@.microsoft.com...
>
>

3 node cluster

Later this summer, I will need to establish a 3 node cluster using Windows
2003 and SQL Server 2K using Veritas Cluster Server software. I don't have
any experience setting up a 3 node cluster, only a 2 node cluster using
MSCS.
What are the primary differences between a 2 node MSCS cluster and a Veritas
3 node cluster?
I was having difficulty finding tech bulletins, etc. specifically using 3
node clusters on Veritas. Does anyone have any good links on this subject?
TIA
Alterego
MSCS Clustering and Veritas Clustering are two different technologies,
independent of one another. If you have more than 3 nodes in either of these
solutions you will be using a SAN for your shared storage. Through your SAN
software you are able to zone and create storage groups that have access to
your LUNs. Veritas Clustering supports the use of Dynamic Disks where MSCS
Clustering does not. One key thing to keep in mind is Microsoft's support
policy for SQL failover clustering. Please read MS KB 327518.
Regards
CT
"Alterego" wrote:

> Later this summer, I will need to establish a 3 node cluster using Windows
> 2003 and SQL Server 2K using Veritas Cluster Server software. I don't have
> any experience setting up a 3 node cluster, only a 2 node cluster using
> MSCS.
> What are the primary differences between a 2 node MSCS cluster and a Veritas
> 3 node cluster?
> I was having difficulty finding tech bulletins, etc. specifically using 3
> node clusters on Veritas. Does anyone have any good links on this subject?
> TIA
>
>

Monday, March 19, 2012

3 Active and 1 Passive Node Cluster in Windows 2003 Server with SQL 2000 SP4

We are looking at setting up the following cluster:
3 Active nodes:
Each server needs to be completely distinct with its own unique
database and its own virtual server and ip. These nodes do not share
the same storage with the other active nodes. Active nodes must only
process for their own db's on the server itself
1 Passive node:
Common backup node for all the active nodes. Cluster service must be
able to transfer the disk ownership from the failed active node to this
one.
Is this possible with Windows 2003 and SQL Server SP4? I'm a little
concerned due to the way you need to apply sp4 to a SQL cluster.
Any advice would be much appreciated
Bill
You are close but are missing some key concepts with SQL clustering. There
are two distinct components involved, nodes and instances. Nodes are the
physical machines that comprise the cluster. Instances are the virtual SQL
servers. A node can host zero or more instances. If you set up a four node
cluster, you can park an instance on each of the three nodes with the fourth
node designated as the first failover node for each instance. As far as the
cluster is concerned, all host nodes are equal so you can allocate the
instances amongst the nodes as you see fit. From the client perspective,
all interaction is with the virtual server (instance) so you don't care
where the instance is actually hosted.
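The node/instance distinction can be sketched as a preferred-owner list per instance: failover moves the instance to the first preferred node that is still up. A toy model with invented names, not the actual cluster service logic:

```python
# Minimal model of instances (virtual servers) as resource groups with an
# ordered preferred-owner list. Node and instance names are hypothetical.

def fail_over(preferred_owners, up_nodes):
    """Return the node that should host the instance, or None if none are up."""
    for node in preferred_owners:
        if node in up_nodes:
            return node
    return None

# three instances parked on N1-N3, with N4 as everyone's first failover target
prefs = {
    "INST1": ["N1", "N4", "N2", "N3"],
    "INST2": ["N2", "N4", "N1", "N3"],
    "INST3": ["N3", "N4", "N1", "N2"],
}

up = {"N2", "N3", "N4"}  # N1 has failed
print({i: fail_over(p, up) for i, p in prefs.items()})
# INST1 moves to the spare N4; INST2 and INST3 stay where they are
```

Clients never see any of this: they connect to the instance's virtual name and IP, which travel with it to whichever node currently hosts it.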
As for storage, you need a storage mechanism that is accessible to all
nodes, typically a SAN. The cluster service arbitrates ownership between the
specific nodes so all the resources necessary for any single instance to
function are always together on a single host node. The instance IP address
is one of the unique resources within the virtual server resource group.
As with all service packs, SP4 is cluster-aware, so that if you run the
installer from the node currently hosting an instance, it upgrades the local
binaries on all nodes for that particular instance. You will have to run
the Service Pack separately for each instance, however it is perfectly fine
to run instances at different SP and hotfix levels within a cluster.
FYI, I have built and managed a four-node, three instance cluster like you
have described, up through SP3a + hotfix 9?. I haven't tried SP4 yet but I
don't see any particular problems except for the already resolved AWE issue.
Geoff N. Hiten
Senior Database Administrator
Microsoft SQL Server MVP
<bcrenshaw99@.yahoo.com> wrote in message
news:1124814061.497589.15020@.g49g2000cwa.googlegroups.com...
> We are looking at setting up the following cluster:
> 3 Active nodes:
> Each server needs to be completely distinct with its own unique
> database and its own virtual server and ip. These nodes do not share
> the same storage with the other active nodes. Active nodes must only
> process for their own db's on the server itself
> 1 Passive node:
> Common backup node for all the active nodes. Cluster service must be
> able to transfer the disk ownership from the failed active node to this
> one.
>
> Is this possible with Windows 2003 and SQL Server SP4? I'm a little
> concerned due to the way you need to apply sp4 to a SQL cluster.
> Any advice would be much appreciated
> Bill
>
|||Sorry, I meant to ask if 3 separate clusters could share a common node?
Node: SQLSRV1
Cluster: SQLCLUST1
Instance: sqlapp1
Node: SQLSRV2
Cluster: SQLCLUST2
Instance: sqlapp2
Node: SQLSRV3
Cluster: SQLCLUST3
Instance: sqlapp3
Node: SQLSRV4
Cluster: SQLCLUST1, SQLCLUST2, SQLCLUST3
Instance: sqlapp1
Instance: sqlapp2
Instance: sqlapp3
|||No. A server can participate in only one cluster. A cluster can support up
to 16 SQL instances and 4 nodes under SQL 2000.
Geoff N. Hiten
Senior Database Administrator
Microsoft SQL Server MVP
<bcrenshaw99@.yahoo.com> wrote in message
news:1124831740.683769.184000@.g47g2000cwa.googlegroups.com...
> Sorry, I meant to ask if 3 separate clusters could share a common node?
> Node: SQLSRV1
> Cluster: SQLCLUST1
> Instance: sqlapp1
>
> Node: SQLSRV2
> Cluster: SQLCLUST2
> Instance: sqlapp2
>
> Node: SQLSRV3
> Cluster: SQLCLUST3
> Instance: sqlapp3
>
> Node: SQLSRV4
> Cluster: SQLCLUST1, SQLCLUST2, SQLCLUST3
> Instance: sqlapp1
> Instance: sqlapp2
> Instance: sqlapp3
>

2nd Node does not come online

I have an "Active/Active" SQL 2000 cluster on Windows 2003 SP1. The first
node is the default instance and the second is a named instance. They are
both listening on port 1433. When I fail the default instance over to the
2nd node, everything fails over and comes online, and I am able to connect and
query the data from both instances. I fail the node back over, and again
everything works.
When I fail the second node (named instance) over to the first node,
everything fails over; however, the SQL Agent, Fulltext searching and the SQL
Server do not come online. Any ideas on why this is happening? Thanks.
You can't have them both listening on port 1433. You can have one, say the
default, on 1433 and the other DYNAMIC, which means SQL will randomly select
a port. When failed over it will attempt to acquire the same port as before,
but not always. Clients rely on Dynamic Discovery, UDP 1434, to detect which
port to use.
If you have these hard-coded, the named instance will fail whenever it fails
over to the node where the default instance has already acquired that port.
Call it port etiquette, but no two TCP services can listen on the same
port on the same server. Sorry.
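That port rule isn't specific to SQL Server; any two TCP listeners on the same address and port collide. A small demonstration with plain sockets (no SQL involved):

```python
# Two TCP sockets cannot listen on the same address/port: the second bind
# fails with "address already in use". This is exactly why a failed-over
# named instance can't take port 1433 if the default instance already has it.

import socket

a = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
a.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
a.listen(1)
port = a.getsockname()[1]

b = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    b.bind(("127.0.0.1", port))   # same port as the first listener
    clash = False
except OSError:                   # EADDRINUSE
    clash = True
finally:
    b.close()
    a.close()

print(clash)  # True - the second service must pick a different port
```

With a dynamic port, the named instance sidesteps the clash, and clients find the new port through the UDP 1434 discovery service mentioned above.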
Sincerely,
Anthony Thomas

"MAGrimsley" <MAGrimsley@.discussions.microsoft.com> wrote in message
news:EC957BA9-76A8-42E4-AD8D-5F47354C149B@.microsoft.com...
> I have an "Active/Active" SQL 2000 cluster on Windows 2003 SP1. The first
> node is the default instance and the second is a named instance. They are
> both listenting on port 1433. When I fail the default instance over to the
> 2nd node, everything failsover and comes on line and I am able to connect and
> query the data from both instances. I fail the node back over, again
> everthing works.
> When I fail the second node (named instance) over to the first node,
> everthing fails over; however, the SQL Agent, Fulltext searching and the SQL
> Server do not come on line. Any ideas on why this is happening? Thanks.

Sunday, March 11, 2012

2nd Node Couldn't join existing cluster

Guys,
I have a cluster running on a single node and I am trying to add the 2nd node,
but I'm getting this error. Any idea? I am using external EMC storage for the
physical disks.
The only difference I could see is that in node 1 the mapping of disks in Disk
Management is different from node 2. For example, the Q drive [1G] is disk 1 in
node 1, while the 1G disk is mapped to a different disk number in node 2. Could
that be the reason? If yes, can I change the sequence? How?
TIA
************************************************************
[INFO] CLUSTER001: Checking that all nodes have access to the quorum
resource...
[INFO] CLUSTER001: Checking that all nodes have access to the quorum
resource...
[INFO] CLUSTER001: The following nodes cannot not verify that they can host
the quorum resource...
[ERR ] CLUSTER001: Could not verify that node "SERVER001" can host the
quorum resource. Ensure that your hardware is properly configured and that
all nodes have access to a quorum-capable resource.
[INFO] CLUSTER001: Checking that all nodes have access to the quorum
resource...
[ERR ] CLUSTER001: A multi-node cluster cannot be created because there is
not a quorum-capable resource common to all nodes. In some complex storage
solutions, such as a fiber channel switched fabric, a particular storage unit
might have a different identity (Target ID and/or LUN) on each computer in
the cluster. Although this is a valid storage configuration, it violates the
storage validation heuristics in the Add Nodes Wizard when using the default
Typical (full) configuration option, resulting in an error.
************************************************************
As long as you are absolutely sure which disk resource is the Quorum
resource, you can override the verification wizard and designate the quorum
disk manually. I have had that problem in the past when there were many
disks presented to the host nodes. Once the node is added to the cluster,
the drives map correctly.
Geoff N. Hiten
Senior Database Administrator
Microsoft SQL Server MVP.
"rupart" <rupart@.discussions.microsoft.com> wrote in message
news:41E599C0-E0CC-4258-B992-7B11B8D49ADA@.microsoft.com...
> Guys
> I have a cluster running on single node and i am trying to add the 2nd
> node
> but getting this error. Any idea? I am using external EMC storage for the
> physical disks.
> The only different i could see is, in node 1, the mapping of disks in disk
> mgmt are different from node 2. Example (Q drive [1G] is in disk 1 in node
> 1
> while the 1G disk is mapped in disk in node 2. Cauld that be the reason?
> If
> yes, Can i change the sequence? How?
> TIA
> ************************************************************
> [INFO] CLUSTER001: Checking that all nodes have access to the quorum
> resource...
> [INFO] CLUSTER001: Checking that all nodes have access to the quorum
> resource...
> [INFO] CLUSTER001: The following nodes cannot not verify that they can
> host
> the quorum resource...
> [ERR ] CLUSTER001: Could not verify that node "SERVER001" can host the
> quorum resource. Ensure that your hardware is properly configured and that
> all nodes have access to a quorum-capable resource.
> [INFO] CLUSTER001: Checking that all nodes have access to the quorum
> resource...
> [ERR ] CLUSTER001: A multi-node cluster cannot be created because there is
> not a quorum-capable resource common to all nodes. In some complex storage
> solutions, such as a fiber channel switched fabric, a particular storage
> unit
> might have a different identity (Target ID and/or LUN) on each computer in
> the cluster. Although this is a valid storage configuration, it violates
> the
> storage validation heuristics in the Add Nodes Wizard when using the
> default
> Typical (full) configuration option, resulting in an error.
> ************************************************************
|||hi Geoff,
Sorry, I was a bit unclear in explaining my current config in earlier msgs.
What I am trying to say was:
in node 1, in disk mgmt, the Q drive is mapped to disk 1
in node 2, in disk mgmt, the Q drive is mapped to disk 4
Anyway, in your reply, does "override verification wizard" mean disabling the
heuristic scanning and joining the node first? I have joined the cluster using
the minimum config, skipping the verification process. Now the 2nd node is in
the cluster and all the disks are held by the 1st node. I wanted to try doing a
failover, but my worry is that it's in production and I'm afraid the incorrect
mapping may corrupt the data if I do a failover. What do you think?
"Geoff N. Hiten" wrote:

> As long as you are absolutely sure which disk resource is the Quorum
> resource, you can override the verification wizard and designate the quorum
> disk manually. I have had that problem in the past when there were many
> disks presented to the host nodes. Once the node is added to the cluster,
> the drives map correctly.
> Geoff N. Hiten
> Senior Database Administrator
> Microsoft SQL Server MVP.
> "rupart" <rupart@.discussions.microsoft.com> wrote in message
> news:41E599C0-E0CC-4258-B992-7B11B8D49ADA@.microsoft.com...
>
>
|||You did exactly what I suggested. Worst case, SQL won't start due to
inability to find the files and it will fail back to the first node with
data intact. I doubt this will happen as the cluster services use disk
signatures to positively ID each disk.
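The disk-signature point can be illustrated with a toy lookup: each node may enumerate disk numbers in a different order, but the signature identifies the same physical disk on both. The signatures and disk numbers below are made up:

```python
# Toy illustration of matching clustered disks by signature rather than by
# the disk-number order each node happens to enumerate. All values invented.

QUORUM_SIGNATURE = 0x1A2B3C4D  # hypothetical disk signature for the Q: drive

# each node's local view: disk number -> signature (numbers differ per node)
node1_disks = {1: 0x1A2B3C4D, 2: 0x5E6F7A8B, 3: 0x9C0D1E2F}
node2_disks = {4: 0x1A2B3C4D, 1: 0x5E6F7A8B, 2: 0x9C0D1E2F}

def find_by_signature(disks, signature):
    """Return the local disk number holding the given signature, or None."""
    for number, sig in disks.items():
        if sig == signature:
            return number
    return None

print(find_by_signature(node1_disks, QUORUM_SIGNATURE))  # 1 on node 1
print(find_by_signature(node2_disks, QUORUM_SIGNATURE))  # 4 on node 2
```

Because both nodes resolve the signature to the same physical disk, the mismatched disk numbers rupart saw are cosmetic, which is why the failover is safe.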
I would wait until a scheduled maintenance window to try a failover, just so
you don't have to explain excessive downtime.
Geoff N. Hiten
Senior Database Administrator
Microsoft SQL Server MVP
"rupart" <rupart@.discussions.microsoft.com> wrote in message
news:92121BBB-5AC8-4B9C-9C32-A7650116F3B7@.microsoft.com...
> hi Geoff,
> Sorry i was bit unlcear in explaining my current config in earlier msgs.
> What i am trying to say was
> in node 1, in disk mgmt, Q drive is mapped in disk 1
> in node 2, in disk mgmt, Q drive is mapped to disk 4
> Anyway,
> In your reply, "override verification wizard" does it mean disabling the
> heauristic scanning and join the node first? I have joined the cluster
> using
> minimum config and skipping the verification process. Now, 2nd node is in
> cluster and all the disks a hold by 1st node. I wanted to try doing
> failover
> but my worries are it's in production and i afraid the incorrect mapping
> may
> corrupt the data if i do failover. What you think?
>
> "Geoff N. Hiten" wrote:
|||hi Geoff,
Thank you. Let me try when I get the downtime. If it still fails, what can I
do to correct the thing? Thanks again
"Geoff N. Hiten" wrote:

> You did exactly what I suggested. Worst case, SQL won't start due to
> inability to find the files and it will fail back to the first node with
> data intact. I doubt this will happen as the cluster services use disk
> signatures to positively ID each disk.
> I would wait until a scheduled maintenance window to try a failover, just so
> you don't have to explain excessive downtime..
> --
> Geoff N. Hiten
> Senior Database Administrator
> Microsoft SQL Server MVP
> "rupart" <rupart@.discussions.microsoft.com> wrote in message
> news:92121BBB-5AC8-4B9C-9C32-A7650116F3B7@.microsoft.com...
>
>
|||If it won't fail over correctly, you will likely have to use Access Logix to
logically disconnect all the disks from the host computer at the EMC level.
You can then present them back to the host node in the correct order one at
a time.
This exact thing has happened to me before. I had over 10 clustered disks
and the wizard couldn't figure out which one was the Quorum. I manually
designated it and everything then matched right up.
Geoff N. Hiten
Senior Database Administrator
Microsoft SQL Server MVP
"rupart" <rupart@.discussions.microsoft.com> wrote in message
news:BDE8B45A-42B4-413E-9102-9C539EE5B8FF@.microsoft.com...
> hi Geoff,
> thank you..let me try when i get the downtime.if it still fails, what can
> i
> do to correct the thing?thanks again
> "Geoff N. Hiten" wrote:
|||hi Geoff,
Actually mine is a SQL cluster. Right now the 2nd node does not have the SQL
binaries. How do I get them installed? Today we had a power trip and used that
opportunity to do a failover. Q failed over as expected, but not the SQL
cluster binaries, because I just realized the 2nd node doesn't have the
binaries. If we had used the wizard, it should have done that (I mean
rebuilding SQL) for us. But how about in this case? How do I do it manually?
Or any other way?
Thanks a lot again
"Geoff N. Hiten" wrote:

> If it won't fail over correctly, you will likely have to use Access Logix to
> logically disconnect all the disks from the host computer at the EMC level.
> You can then present them back to the host node in the correct order one at
> a time.
> This exact thing has happened to me before. I had over 10 clustered disks
> and the wizard couldn't figure out which one was the Quorum. I manually
> designated it and everything then matched right up.
> --
> Geoff N. Hiten
> Senior Database Administrator
> Microsoft SQL Server MVP
> "rupart" <rupart@.discussions.microsoft.com> wrote in message
> news:BDE8B45A-42B4-413E-9102-9C539EE5B8FF@.microsoft.com...
>
>
|||Check out "How to add nodes to an existing virtual server" in the BOL.
Also, once that is done, install the service pack on the *new* node, while
the SQL group is running on the *existing* node.
Tom
Thomas A. Moreau, BSc, PhD, MCSE, MCDBA
SQL Server MVP
Columnist, SQL Server Professional
Toronto, ON Canada
www.pinpub.com
"rupart" <rupart@.discussions.microsoft.com> wrote in message
news:8FD9E670-66DC-4540-8829-8A438FF2AE5C@.microsoft.com...[vbcol=seagreen]
> hi Geoff,
> Actually mine is SQL cluster. Right now 2nd node does not have the SQL
> binaries. How do i get them installed? Today we had power trip and used
> that
> opportunity to do failover. Q failover as expected but not the sql cluster
> binaries because i just relaized the 2nd node doesnt have the binaries. If
> we
> used wizard, it shd have done that(i mean rebuilding the sql) for us. But
> how
> abt in this case? How to do manually? or any other way?
> thanks a lot again
> "Geoff N. Hiten" wrote:
|||The Cluster wizard only handles the cluster configuration. You still have
to add SQL. As Tom pointed out, there is a section in BOL on maintaining a
failover cluster. The topic How to Add Nodes to a Failover Cluster will
walk you through the basic install. You will need to reapply any service
packs and hotfixes to bring the binaries fully up to date.
Geoff N. Hiten
Senior Database Administrator
Microsoft SQL Server MVP
"rupart" <rupart@.discussions.microsoft.com> wrote in message
news:8FD9E670-66DC-4540-8829-8A438FF2AE5C@.microsoft.com...[vbcol=seagreen]
> hi Geoff,
> Actually mine is SQL cluster. Right now 2nd node does not have the SQL
> binaries. How do i get them installed? Today we had power trip and used
> that
> opportunity to do failover. Q failover as expected but not the sql cluster
> binaries because i just relaized the 2nd node doesnt have the binaries. If
> we
> used wizard, it shd have done that(i mean rebuilding the sql) for us. But
> how
> abt in this case? How to do manually? or any other way?
> thanks a lot again
> "Geoff N. Hiten" wrote:
|||hi Tom,
is BOL=beginning of line? i cant see any link or reference or does BOL means
sth else...
"Tom Moreau" wrote:

> Check out "How to add nodes to an existing virtual server" in the BOL.
> Also, once that is done, install the service pack on the *new* node, while
> the SQL group is running on the *existing* node.
> --
> Tom
> ----
> Thomas A. Moreau, BSc, PhD, MCSE, MCDBA
> SQL Server MVP
> Columnist, SQL Server Professional
> Toronto, ON Canada
> www.pinpub.com
> "rupart" <rupart@.discussions.microsoft.com> wrote in message
> news:8FD9E670-66DC-4540-8829-8A438FF2AE5C@.microsoft.com...
>
>

2nd Node Couldn't join existing cluster

Guys
I have a cluster running on a single node and I am trying to add the 2nd node
but am getting this error. Any idea? I am using external EMC storage for the
physical disks.
The only difference I could see is that in node 1 the mapping of disks in disk
mgmt is different from node 2. Example: the Q drive [1G] is in disk 1 in node 1
while the 1G disk is mapped in disk in node 2. Could that be the reason? If
yes, can I change the sequence? How?
TIA
************************************************************
[INFO] CLUSTER001: Checking that all nodes have access to the quorum
resource...
[INFO] CLUSTER001: Checking that all nodes have access to the quorum
resource...
[INFO] CLUSTER001: The following nodes cannot not verify that they can host
the quorum resource...
[ERR ] CLUSTER001: Could not verify that node "SERVER001" can host the
quorum resource. Ensure that your hardware is properly configured and that
all nodes have access to a quorum-capable resource.
[INFO] CLUSTER001: Checking that all nodes have access to the quorum
resource...
[ERR ] CLUSTER001: A multi-node cluster cannot be created because there is
not a quorum-capable resource common to all nodes. In some complex storage
solutions, such as a fiber channel switched fabric, a particular storage unit
might have a different identity (Target ID and/or LUN) on each computer in
the cluster. Although this is a valid storage configuration, it violates the
storage validation heuristics in the Add Nodes Wizard when using the default
Typical (full) configuration option, resulting in an error.
************************************************************
As long as you are absolutely sure which disk resource is the Quorum
resource, you can override the verification wizard and designate the quorum
disk manually. I have had that problem in the past when there were many
disks presented to the host nodes. Once the node is added to the cluster,
the drives map correctly.
Geoff N. Hiten
Senior Database Administrator
Microsoft SQL Server MVP.
"rupart" <rupart@.discussions.microsoft.com> wrote in message
news:41E599C0-E0CC-4258-B992-7B11B8D49ADA@.microsoft.com...
> Guys
> I have a cluster running on single node and i am trying to add the 2nd
> node
> but getting this error. Any idea? I am using external EMC storage for the
> physical disks.
> The only different i could see is, in node 1, the mapping of disks in disk
> mgmt are different from node 2. Example (Q drive [1G] is in disk 1 in node
> 1
> while the 1G disk is mapped in disk in node 2. Cauld that be the reason?
> If
> yes, Can i change the sequence? How?
> TIA
> ************************************************************
> [INFO] CLUSTER001: Checking that all nodes have access to the quorum
> resource...
> [INFO] CLUSTER001: Checking that all nodes have access to the quorum
> resource...
> [INFO] CLUSTER001: The following nodes cannot not verify that they can
> host
> the quorum resource...
> [ERR ] CLUSTER001: Could not verify that node "SERVER001" can host the
> quorum resource. Ensure that your hardware is properly configured and that
> all nodes have access to a quorum-capable resource.
> [INFO] CLUSTER001: Checking that all nodes have access to the quorum
> resource...
> [ERR ] CLUSTER001: A multi-node cluster cannot be created because there is
> not a quorum-capable resource common to all nodes. In some complex storage
> solutions, such as a fiber channel switched fabric, a particular storage
> unit
> might have a different identity (Target ID and/or LUN) on each computer in
> the cluster. Although this is a valid storage configuration, it violates
> the
> storage validation heuristics in the Add Nodes Wizard when using the
> default
> Typical (full) configuration option, resulting in an error.
> ************************************************************
|||hi Geoff,
Sorry, I was a bit unclear in explaining my current config in earlier msgs.
What I am trying to say was:
in node 1, in disk mgmt, the Q drive is mapped to disk 1
in node 2, in disk mgmt, the Q drive is mapped to disk 4
Anyway,
in your reply, does "override the verification wizard" mean disabling the
heuristic scanning and joining the node first? I have joined the cluster using
the minimum config and skipped the verification process. Now the 2nd node is in
the cluster and all the disks are held by the 1st node. I wanted to try doing a
failover, but my worry is that it's in production and I'm afraid the incorrect
mapping may corrupt the data if I do a failover. What do you think?
"Geoff N. Hiten" wrote:

> As long as you are absolutely sure which disk resource is the Quorum
> resource, you can override the verification wizard and designate the quorum
> disk manually. I have had that problem in the past when there were many
> disks presented to the host nodes. Once the node is added to the cluster,
> the drives map correctly.
> Geoff N. Hiten
> Senior Database Administrator
> Microsoft SQL Server MVP.
> "rupart" <rupart@.discussions.microsoft.com> wrote in message
> news:41E599C0-E0CC-4258-B992-7B11B8D49ADA@.microsoft.com...
>
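For reference, designating the quorum resource manually can also be done from the command line on Windows Server 2003 with cluster.exe. The sketch below is generic, not taken from this thread; the resource name "Disk Q:" is a stand-in for whatever the Q: physical disk resource is actually called, so list the resources first and substitute the real name.

```
REM List clustered resources to find the exact name of the
REM physical disk resource that holds the Q: drive.
cluster res

REM Designate that disk resource as the quorum resource.
cluster /quorum:"Disk Q:"
```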
>|||You did exactly what I suggested. Worst case, SQL won't start due to
inability to find the files and it will fail back to the first node with
data intact. I doubt this will happen as the cluster services use disk
signatures to positively ID each disk.
I would wait until a scheduled maintenance window to try a failover, just so
you don't have to explain excessive downtime.
Geoff N. Hiten
Senior Database Administrator
Microsoft SQL Server MVP
"rupart" <rupart@.discussions.microsoft.com> wrote in message
news:92121BBB-5AC8-4B9C-9C32-A7650116F3B7@.microsoft.com...
> hi Geoff,
> Sorry i was bit unlcear in explaining my current config in earlier msgs.
> What i am trying to say was
> in node 1, in disk mgmt, Q drive is mapped in disk 1
> in node 2, in disk mgmt, Q drive is mapped to disk 4
> Anyway,
> In your reply, "override verification wizard" does it mean disabling the
> heauristic scanning and join the node first? I have joined the cluster
> using
> minimum config and skipping the verification process. Now, 2nd node is in
> cluster and all the disks a hold by 1st node. I wanted to try doing
> failover
> but my worries are it's in production and i afraid the incorrect mapping
> may
> corrupt the data if i do failover. What you think?
>
> "Geoff N. Hiten" wrote:
>|||hi Geoff,
thank you.. let me try when I get the downtime. If it still fails, what can I
do to correct the thing? Thanks again
"Geoff N. Hiten" wrote:

> You did exactly what I suggested. Worst case, SQL won't start due to
> inability to find the files and it will fail back to the first node with
> data intact. I doubt this will happen as the cluster services use disk
> signatures to positively ID each disk.
> I would wait until a scheduled maintenance window to try a failover, just so
> you don't have to explain excessive downtime..
> --
> Geoff N. Hiten
> Senior Database Administrator
> Microsoft SQL Server MVP
> "rupart" <rupart@.discussions.microsoft.com> wrote in message
> news:92121BBB-5AC8-4B9C-9C32-A7650116F3B7@.microsoft.com...
>
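A quick way to confirm the outcome of such a test failover from the SQL side is to query SERVERPROPERTY against the virtual server name (ComputerNamePhysicalNetBIOS is available from SQL Server 2000 SP3 onward). This is a generic check, not something posted in the thread:

```sql
-- Connect to the virtual server name after the failover: shows which
-- physical node currently hosts the instance, and that it is clustered.
SELECT SERVERPROPERTY('ComputerNamePhysicalNetBIOS') AS current_host_node,
       SERVERPROPERTY('IsClustered')                 AS is_clustered;
```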
>|||If it won't fail over correctly, you will likely have to use Access Logix to
logically disconnect all the disks from the host computer at the EMC level.
You can then present them back to the host node in the correct order one at
a time.
This exact thing has happened to me before. I had over 10 clustered disks
and the wizard couldn't figure out which one was the Quorum. I manually
designated it and everything then matched right up.
Geoff N. Hiten
Senior Database Administrator
Microsoft SQL Server MVP
"rupart" <rupart@.discussions.microsoft.com> wrote in message
news:BDE8B45A-42B4-413E-9102-9C539EE5B8FF@.microsoft.com...
> hi Geoff,
> thank you..let me try when i get the downtime.if it still fails, what can
> i
> do to correct the thing?thanks again
> "Geoff N. Hiten" wrote:
>|||hi Geoff,
Actually mine is a SQL cluster. Right now the 2nd node does not have the SQL
binaries. How do I get them installed? Today we had a power trip and used that
opportunity to do a failover. Q failed over as expected but not the SQL cluster
binaries, because I just realized the 2nd node doesn't have the binaries. If we
had used the wizard, it should have done that (I mean rebuilding the SQL) for
us. But how about in this case? How do I do it manually? Or any other way?
thanks a lot again
"Geoff N. Hiten" wrote:

> If it won't fail over correctly, you will likely have to use Access Logix to
> logically disconnect all the disks from the host computer at the EMC level.
> You can then present them back to the host node in the correct order one at
> a time.
> This exact thing has happened to me before. I had over 10 clustered disks
> and the wizard couldn't figure out which one was the Quorum. I manually
> designated it and everything then matched right up.
> --
> Geoff N. Hiten
> Senior Database Administrator
> Microsoft SQL Server MVP
> "rupart" <rupart@.discussions.microsoft.com> wrote in message
> news:BDE8B45A-42B4-413E-9102-9C539EE5B8FF@.microsoft.com...
>
>|||Check out "How to add nodes to an existing virtual server" in the BOL.
Also, once that is done, install the service pack on the *new* node, while
the SQL group is running on the *existing* node.
--
Tom
----
Thomas A. Moreau, BSc, PhD, MCSE, MCDBA
SQL Server MVP
Columnist, SQL Server Professional
Toronto, ON Canada
www.pinpub.com
"rupart" <rupart@.discussions.microsoft.com> wrote in message
news:8FD9E670-66DC-4540-8829-8A438FF2AE5C@.microsoft.com...
> hi Geoff,
> Actually mine is SQL cluster. Right now 2nd node does not have the SQL
> binaries. How do i get them installed? Today we had power trip and used
> that
> opportunity to do failover. Q failover as expected but not the sql cluster
> binaries because i just relaized the 2nd node doesnt have the binaries. If
> we
> used wizard, it shd have done that(i mean rebuilding the sql) for us. But
> how
> abt in this case? How to do manually? or any other way?
> thanks a lot again
> "Geoff N. Hiten" wrote:
>|||The Cluster wizard only handles the cluster configuration. You still have
to add SQL. As Tom pointed out, there is a section in BOL on maintaining a
failover cluster. The topic How to Add Nodes to a Failover Cluster will
walk you through the basic install. You will need to reapply any service
packs and hotfixes to bring the binaries fully up to date.
Geoff N. Hiten
Senior Database Administrator
Microsoft SQL Server MVP
"rupart" <rupart@.discussions.microsoft.com> wrote in message
news:8FD9E670-66DC-4540-8829-8A438FF2AE5C@.microsoft.com...
> hi Geoff,
> Actually mine is SQL cluster. Right now 2nd node does not have the SQL
> binaries. How do i get them installed? Today we had power trip and used
> that
> opportunity to do failover. Q failover as expected but not the sql cluster
> binaries because i just relaized the 2nd node doesnt have the binaries. If
> we
> used wizard, it shd have done that(i mean rebuilding the sql) for us. But
> how
> abt in this case? How to do manually? or any other way?
> thanks a lot again
> "Geoff N. Hiten" wrote:
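Once the node is added and the service packs and hotfixes reapplied as Geoff describes, the build level is easy to verify after failing over to the new node. Again a generic SERVERPROPERTY query, not something from the thread:

```sql
-- ProductLevel reports the service pack level (e.g. 'SP4' on SQL Server 2000);
-- compare the result while each node hosts the instance to confirm they match.
SELECT SERVERPROPERTY('ProductVersion') AS product_version,
       SERVERPROPERTY('ProductLevel')   AS service_pack_level;
```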
>|||hi Tom,
is BOL = beginning of line? I can't see any link or reference, or does BOL mean
sth else...
"Tom Moreau" wrote:

> Check out "How to add nodes to an existing virtual server" in the BOL.
> Also, once that is done, install the service pack on the *new* node, while
> the SQL group is running on the *existing* node.
> --
> Tom
> ----
> Thomas A. Moreau, BSc, PhD, MCSE, MCDBA
> SQL Server MVP
> Columnist, SQL Server Professional
> Toronto, ON Canada
> www.pinpub.com
> "rupart" <rupart@.discussions.microsoft.com> wrote in message
> news:8FD9E670-66DC-4540-8829-8A438FF2AE5C@.microsoft.com...
>
>

2nd Node Couldnt join existing cluster

Guys
I have a cluster running on single node and i am trying to add the 2nd node
but getting this error. Any idea? I am using external EMC storage for the
physical disks.
The only different i could see is, in node 1, the mapping of disks in disk
mgmt are different from node 2. Example (Q drive [1G] is in disk 1 in node 1
while the 1G disk is mapped in disk in node 2. Cauld that be the reason? If
yes, Can i change the sequence? How?
TIA
************************************************************
[INFO] CLUSTER001: Checking that all nodes have access to the quorum
resource...
[INFO] CLUSTER001: Checking that all nodes have access to the quorum
resource...
[INFO] CLUSTER001: The following nodes cannot not verify that they can host
the quorum resource...
[ERR ] CLUSTER001: Could not verify that node "SERVER001" can host the
quorum resource. Ensure that your hardware is properly configured and that
all nodes have access to a quorum-capable resource.
[INFO] CLUSTER001: Checking that all nodes have access to the quorum
resource...
[ERR ] CLUSTER001: A multi-node cluster cannot be created because there is
not a quorum-capable resource common to all nodes. In some complex storage
solutions, such as a fiber channel switched fabric, a particular storage unit
might have a different identity (Target ID and/or LUN) on each computer in
the cluster. Although this is a valid storage configuration, it violates the
storage validation heuristics in the Add Nodes Wizard when using the default
Typical (full) configuration option, resulting in an error.
************************************************************As long as you are absolutely sure which disk resource is the Quorum
resource, you can override the verification wizard and designate the quorum
disk manually. I have had that problem in the past when there were many
disks presented to the host nodes. Once the node is added to the cluster,
the drives map correctly.
Geoff N. Hiten
Senior Database Administrator
Microsoft SQL Server MVP.
"rupart" <rupart@.discussions.microsoft.com> wrote in message
news:41E599C0-E0CC-4258-B992-7B11B8D49ADA@.microsoft.com...
> Guys
> I have a cluster running on single node and i am trying to add the 2nd
> node
> but getting this error. Any idea? I am using external EMC storage for the
> physical disks.
> The only different i could see is, in node 1, the mapping of disks in disk
> mgmt are different from node 2. Example (Q drive [1G] is in disk 1 in node
> 1
> while the 1G disk is mapped in disk in node 2. Cauld that be the reason?
> If
> yes, Can i change the sequence? How?
> TIA
> ************************************************************
> [INFO] CLUSTER001: Checking that all nodes have access to the quorum
> resource...
> [INFO] CLUSTER001: Checking that all nodes have access to the quorum
> resource...
> [INFO] CLUSTER001: The following nodes cannot not verify that they can
> host
> the quorum resource...
> [ERR ] CLUSTER001: Could not verify that node "SERVER001" can host the
> quorum resource. Ensure that your hardware is properly configured and that
> all nodes have access to a quorum-capable resource.
> [INFO] CLUSTER001: Checking that all nodes have access to the quorum
> resource...
> [ERR ] CLUSTER001: A multi-node cluster cannot be created because there is
> not a quorum-capable resource common to all nodes. In some complex storage
> solutions, such as a fiber channel switched fabric, a particular storage
> unit
> might have a different identity (Target ID and/or LUN) on each computer in
> the cluster. Although this is a valid storage configuration, it violates
> the
> storage validation heuristics in the Add Nodes Wizard when using the
> default
> Typical (full) configuration option, resulting in an error.
> ************************************************************|||hi Geoff,
Sorry i was bit unlcear in explaining my current config in earlier msgs.
What i am trying to say was
in node 1, in disk mgmt, Q drive is mapped in disk 1
in node 2, in disk mgmt, Q drive is mapped to disk 4
Anyway,
In your reply, "override verification wizard" does it mean disabling the
heauristic scanning and join the node first? I have joined the cluster using
minimum config and skipping the verification process. Now, 2nd node is in
cluster and all the disks a hold by 1st node. I wanted to try doing failover
but my worries are it's in production and i afraid the incorrect mapping may
corrupt the data if i do failover. What you think?
"Geoff N. Hiten" wrote:
> As long as you are absolutely sure which disk resource is the Quorum
> resource, you can override the verification wizard and designate the quorum
> disk manually. I have had that problem in the past when there were many
> disks presented to the host nodes. Once the node is added to the cluster,
> the drives map correctly.
> Geoff N. Hiten
> Senior Database Administrator
> Microsoft SQL Server MVP.
> "rupart" <rupart@.discussions.microsoft.com> wrote in message
> news:41E599C0-E0CC-4258-B992-7B11B8D49ADA@.microsoft.com...
> > Guys
> > I have a cluster running on single node and i am trying to add the 2nd
> > node
> > but getting this error. Any idea? I am using external EMC storage for the
> > physical disks.
> > The only different i could see is, in node 1, the mapping of disks in disk
> > mgmt are different from node 2. Example (Q drive [1G] is in disk 1 in node
> > 1
> > while the 1G disk is mapped in disk in node 2. Cauld that be the reason?
> > If
> > yes, Can i change the sequence? How?
> > TIA
> >
> > ************************************************************
> > [INFO] CLUSTER001: Checking that all nodes have access to the quorum
> > resource...
> >
> > [INFO] CLUSTER001: Checking that all nodes have access to the quorum
> > resource...
> >
> > [INFO] CLUSTER001: The following nodes cannot not verify that they can
> > host
> > the quorum resource...
> >
> > [ERR ] CLUSTER001: Could not verify that node "SERVER001" can host the
> > quorum resource. Ensure that your hardware is properly configured and that
> > all nodes have access to a quorum-capable resource.
> >
> > [INFO] CLUSTER001: Checking that all nodes have access to the quorum
> > resource...
> >
> > [ERR ] CLUSTER001: A multi-node cluster cannot be created because there is
> > not a quorum-capable resource common to all nodes. In some complex storage
> > solutions, such as a fiber channel switched fabric, a particular storage
> > unit
> > might have a different identity (Target ID and/or LUN) on each computer in
> > the cluster. Although this is a valid storage configuration, it violates
> > the
> > storage validation heuristics in the Add Nodes Wizard when using the
> > default
> > Typical (full) configuration option, resulting in an error.
> > ************************************************************
>
>|||You did exactly what I suggested. Worst case, SQL won't start due to
inability to find the files and it will fail back to the first node with
data intact. I doubt this will happen as the cluster services use disk
signatures to positively ID each disk.
I would wait until a scheduled maintenance window to try a failover, just so
you don't have to explain excessive downtime..
--
Geoff N. Hiten
Senior Database Administrator
Microsoft SQL Server MVP
"rupart" <rupart@.discussions.microsoft.com> wrote in message
news:92121BBB-5AC8-4B9C-9C32-A7650116F3B7@.microsoft.com...
> hi Geoff,
> Sorry i was bit unlcear in explaining my current config in earlier msgs.
> What i am trying to say was
> in node 1, in disk mgmt, Q drive is mapped in disk 1
> in node 2, in disk mgmt, Q drive is mapped to disk 4
> Anyway,
> In your reply, "override verification wizard" does it mean disabling the
> heauristic scanning and join the node first? I have joined the cluster
> using
> minimum config and skipping the verification process. Now, 2nd node is in
> cluster and all the disks a hold by 1st node. I wanted to try doing
> failover
> but my worries are it's in production and i afraid the incorrect mapping
> may
> corrupt the data if i do failover. What you think?
>
> "Geoff N. Hiten" wrote:
>> As long as you are absolutely sure which disk resource is the Quorum
>> resource, you can override the verification wizard and designate the
>> quorum
>> disk manually. I have had that problem in the past when there were many
>> disks presented to the host nodes. Once the node is added to the
>> cluster,
>> the drives map correctly.
>> Geoff N. Hiten
>> Senior Database Administrator
>> Microsoft SQL Server MVP.
>> "rupart" <rupart@.discussions.microsoft.com> wrote in message
>> news:41E599C0-E0CC-4258-B992-7B11B8D49ADA@.microsoft.com...
>> > Guys
>> > I have a cluster running on single node and i am trying to add the 2nd
>> > node
>> > but getting this error. Any idea? I am using external EMC storage for
>> > the
>> > physical disks.
>> > The only different i could see is, in node 1, the mapping of disks in
>> > disk
>> > mgmt are different from node 2. Example (Q drive [1G] is in disk 1 in
>> > node
>> > 1
>> > while the 1G disk is mapped in disk in node 2. Cauld that be the
>> > reason?
>> > If
>> > yes, Can i change the sequence? How?
>> > TIA
>> >
>> > ************************************************************
>> > [INFO] CLUSTER001: Checking that all nodes have access to the quorum
>> > resource...
>> >
>> > [INFO] CLUSTER001: Checking that all nodes have access to the quorum
>> > resource...
>> >
>> > [INFO] CLUSTER001: The following nodes cannot not verify that they can
>> > host
>> > the quorum resource...
>> >
>> > [ERR ] CLUSTER001: Could not verify that node "SERVER001" can host the
>> > quorum resource. Ensure that your hardware is properly configured and
>> > that
>> > all nodes have access to a quorum-capable resource.
>> >
>> > [INFO] CLUSTER001: Checking that all nodes have access to the quorum
>> > resource...
>> >
>> > [ERR ] CLUSTER001: A multi-node cluster cannot be created because there
>> > is
>> > not a quorum-capable resource common to all nodes. In some complex
>> > storage
>> > solutions, such as a fiber channel switched fabric, a particular
>> > storage
>> > unit
>> > might have a different identity (Target ID and/or LUN) on each computer
>> > in
>> > the cluster. Although this is a valid storage configuration, it
>> > violates
>> > the
>> > storage validation heuristics in the Add Nodes Wizard when using the
>> > default
>> > Typical (full) configuration option, resulting in an error.
>> > ************************************************************
>>|||hi Geoff,
thank you..let me try when i get the downtime.if it still fails, what can i
do to correct the thing?thanks again
"Geoff N. Hiten" wrote:
> You did exactly what I suggested. Worst case, SQL won't start due to
> inability to find the files and it will fail back to the first node with
> data intact. I doubt this will happen as the cluster services use disk
> signatures to positively ID each disk.
> I would wait until a scheduled maintenance window to try a failover, just so
> you don't have to explain excessive downtime..
> --
> Geoff N. Hiten
> Senior Database Administrator
> Microsoft SQL Server MVP
> "rupart" <rupart@.discussions.microsoft.com> wrote in message
> news:92121BBB-5AC8-4B9C-9C32-A7650116F3B7@.microsoft.com...
> > hi Geoff,
> > Sorry i was bit unlcear in explaining my current config in earlier msgs.
> > What i am trying to say was
> > in node 1, in disk mgmt, Q drive is mapped in disk 1
> > in node 2, in disk mgmt, Q drive is mapped to disk 4
> >
> > Anyway,
> > In your reply, "override verification wizard" does it mean disabling the
> > heauristic scanning and join the node first? I have joined the cluster
> > using
> > minimum config and skipping the verification process. Now, 2nd node is in
> > cluster and all the disks a hold by 1st node. I wanted to try doing
> > failover
> > but my worries are it's in production and i afraid the incorrect mapping
> > may
> > corrupt the data if i do failover. What you think?
> >
> >
> > "Geoff N. Hiten" wrote:
> >
> >> As long as you are absolutely sure which disk resource is the Quorum
> >> resource, you can override the verification wizard and designate the
> >> quorum
> >> disk manually. I have had that problem in the past when there were many
> >> disks presented to the host nodes. Once the node is added to the
> >> cluster,
> >> the drives map correctly.
> >>
> >> Geoff N. Hiten
> >> Senior Database Administrator
> >> Microsoft SQL Server MVP.
> >>
> >> "rupart" <rupart@.discussions.microsoft.com> wrote in message
> >> news:41E599C0-E0CC-4258-B992-7B11B8D49ADA@.microsoft.com...
> >> > Guys
> >> > I have a cluster running on single node and i am trying to add the 2nd
> >> > node
> >> > but getting this error. Any idea? I am using external EMC storage for
> >> > the
> >> > physical disks.
> >> > The only different i could see is, in node 1, the mapping of disks in
> >> > disk
> >> > mgmt are different from node 2. Example (Q drive [1G] is in disk 1 in
> >> > node
> >> > 1
> >> > while the 1G disk is mapped in disk in node 2. Cauld that be the
> >> > reason?
> >> > If
> >> > yes, Can i change the sequence? How?
> >> > TIA
> >> >
> >> > ************************************************************
> >> > [INFO] CLUSTER001: Checking that all nodes have access to the quorum
> >> > resource...
> >> >
> >> > [INFO] CLUSTER001: Checking that all nodes have access to the quorum
> >> > resource...
> >> >
> >> > [INFO] CLUSTER001: The following nodes cannot not verify that they can
> >> > host
> >> > the quorum resource...
> >> >
> >> > [ERR ] CLUSTER001: Could not verify that node "SERVER001" can host the
> >> > quorum resource. Ensure that your hardware is properly configured and
> >> > that
> >> > all nodes have access to a quorum-capable resource.
> >> >
> >> > [INFO] CLUSTER001: Checking that all nodes have access to the quorum
> >> > resource...
> >> >
> >> > [ERR ] CLUSTER001: A multi-node cluster cannot be created because there
> >> > is
> >> > not a quorum-capable resource common to all nodes. In some complex
> >> > storage
> >> > solutions, such as a fiber channel switched fabric, a particular
> >> > storage
> >> > unit
> >> > might have a different identity (Target ID and/or LUN) on each computer
> >> > in
> >> > the cluster. Although this is a valid storage configuration, it
> >> > violates
> >> > the
> >> > storage validation heuristics in the Add Nodes Wizard when using the
> >> > default
> >> > Typical (full) configuration option, resulting in an error.
> >> > ************************************************************
> >>
> >>
> >>
>
>|||If it won't fail over correctly, you will likely have to use Access Logix to
logically disconnect all the disks from the host computer at the EMC level.
You can then present them back to the host node in the correct order one at
a time.
This exact thing has happened to me before. I had over 10 clustered disks
and the wizard couldn't figure out which one was the Quorum. I manually
designated it and everything then matched right up.
--
Geoff N. Hiten
Senior Database Administrator
Microsoft SQL Server MVP
"rupart" <rupart@.discussions.microsoft.com> wrote in message
news:BDE8B45A-42B4-413E-9102-9C539EE5B8FF@.microsoft.com...
> hi Geoff,
> thank you..let me try when i get the downtime.if it still fails, what can
> i
> do to correct the thing?thanks again
> "Geoff N. Hiten" wrote:
>> You did exactly what I suggested. Worst case, SQL won't start due to
>> inability to find the files and it will fail back to the first node with
>> data intact. I doubt this will happen as the cluster services use disk
>> signatures to positively ID each disk.
>> I would wait until a scheduled maintenance window to try a failover, just
>> so
>> you don't have to explain excessive downtime..
>> --
>> Geoff N. Hiten
>> Senior Database Administrator
>> Microsoft SQL Server MVP
>> "rupart" <rupart@.discussions.microsoft.com> wrote in message
>> news:92121BBB-5AC8-4B9C-9C32-A7650116F3B7@.microsoft.com...
>> > hi Geoff,
>> > Sorry i was bit unlcear in explaining my current config in earlier
>> > msgs.
>> > What i am trying to say was
>> > in node 1, in disk mgmt, Q drive is mapped in disk 1
>> > in node 2, in disk mgmt, Q drive is mapped to disk 4
>> >
>> > Anyway,
>> > In your reply, "override verification wizard" does it mean disabling
>> > the
>> > heauristic scanning and join the node first? I have joined the cluster
>> > using
>> > minimum config and skipping the verification process. Now, 2nd node is
>> > in
>> > cluster and all the disks a hold by 1st node. I wanted to try doing
>> > failover
>> > but my worries are it's in production and i afraid the incorrect
>> > mapping
>> > may
>> > corrupt the data if i do failover. What you think?
>> >
>> >
>> > "Geoff N. Hiten" wrote:
>> >
>> >> As long as you are absolutely sure which disk resource is the Quorum
>> >> resource, you can override the verification wizard and designate the
>> >> quorum
>> >> disk manually. I have had that problem in the past when there were
>> >> many
>> >> disks presented to the host nodes. Once the node is added to the
>> >> cluster,
>> >> the drives map correctly.
>> >>
>> >> Geoff N. Hiten
>> >> Senior Database Administrator
>> >> Microsoft SQL Server MVP.
>> >>
>> >> "rupart" <rupart@.discussions.microsoft.com> wrote in message
>> >> news:41E599C0-E0CC-4258-B992-7B11B8D49ADA@.microsoft.com...
>> >> > Guys
>> >> > I have a cluster running on single node and i am trying to add the
>> >> > 2nd
>> >> > node
>> >> > but getting this error. Any idea? I am using external EMC storage
>> >> > for
>> >> > the
>> >> > physical disks.
>> >> > The only difference I could see is that on node 1 the mapping of disks
>> >> > in disk mgmt differs from node 2. Example: the Q drive [1G] is disk 1
>> >> > on node 1, while the same 1G disk is mapped to disk 4 on node 2. Could
>> >> > that be the reason? If yes, can I change the sequence? How?
>> >> > TIA
>> >> >
>> >> > ************************************************************
>> >> > [INFO] CLUSTER001: Checking that all nodes have access to the quorum
>> >> > resource...
>> >> >
>> >> > [INFO] CLUSTER001: Checking that all nodes have access to the quorum
>> >> > resource...
>> >> >
>> >> > [INFO] CLUSTER001: The following nodes cannot verify that they can
>> >> > host the quorum resource...
>> >> >
>> >> > [ERR ] CLUSTER001: Could not verify that node "SERVER001" can host the
>> >> > quorum resource. Ensure that your hardware is properly configured and
>> >> > that all nodes have access to a quorum-capable resource.
>> >> >
>> >> > [INFO] CLUSTER001: Checking that all nodes have access to the quorum
>> >> > resource...
>> >> >
>> >> > [ERR ] CLUSTER001: A multi-node cluster cannot be created because
>> >> > there is not a quorum-capable resource common to all nodes. In some
>> >> > complex storage solutions, such as a fiber channel switched fabric, a
>> >> > particular storage unit might have a different identity (Target ID
>> >> > and/or LUN) on each computer in the cluster. Although this is a valid
>> >> > storage configuration, it violates the storage validation heuristics
>> >> > in the Add Nodes Wizard when using the default Typical (full)
>> >> > configuration option, resulting in an error.
>> >> > ************************************************************
>> >>
>> >>
>> >>
>>|||hi Geoff,
Actually mine is a SQL cluster. Right now the 2nd node does not have the SQL
binaries. How do I get them installed? Today we had a power outage and used
that opportunity to do a failover. Q failed over as expected, but not the SQL
resources, because I just realized the 2nd node doesn't have the binaries. If
we had used the wizard, it would have installed SQL on both nodes for us. But
how about in this case? How do I do it manually, or is there another way?
thanks a lot again
"Geoff N. Hiten" wrote:
> If it won't fail over correctly, you will likely have to use Access Logix to
> logically disconnect all the disks from the host computer at the EMC level.
> You can then present them back to the host node in the correct order one at
> a time.
> This exact thing has happened to me before. I had over 10 clustered disks
> and the wizard couldn't figure out which one was the Quorum. I manually
> designated it and everything then matched right up.
> --
> Geoff N. Hiten
> Senior Database Administrator
> Microsoft SQL Server MVP
> "rupart" <rupart@.discussions.microsoft.com> wrote in message
> news:BDE8B45A-42B4-413E-9102-9C539EE5B8FF@.microsoft.com...
> > hi Geoff,
> > thank you.. let me try when I get the downtime. If it still fails, what
> > can I do to correct things? thanks again
> >
> > "Geoff N. Hiten" wrote:
> >
> >> You did exactly what I suggested. Worst case, SQL won't start due to
> >> inability to find the files and it will fail back to the first node with
> >> data intact. I doubt this will happen as the cluster services use disk
> >> signatures to positively ID each disk.
> >>
> >> I would wait until a scheduled maintenance window to try a failover, just
> >> so
> >> you don't have to explain excessive downtime..
> >>
> >> --
> >> Geoff N. Hiten
> >> Senior Database Administrator
> >> Microsoft SQL Server MVP
>|||Check out "How to add nodes to an existing virtual server" in the BOL.
Also, once that is done, install the service pack on the *new* node, while
the SQL group is running on the *existing* node.
--
Tom
----
Thomas A. Moreau, BSc, PhD, MCSE, MCDBA
SQL Server MVP
Columnist, SQL Server Professional
Toronto, ON Canada
www.pinpub.com
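Once the node is added per that BOL topic, the failover test itself can be driven from a command prompt with cluster.exe. This is only an illustrative sketch; "SQL Group", NODE1 and NODE2 below are placeholder names for your own group and node names:

```text
rem list all cluster groups and which node currently owns them
cluster group

rem list resources with their state and owning group
cluster resource

rem during a maintenance window, move the SQL group to the new node
cluster group "SQL Group" /moveto:NODE2

rem move it back once the test succeeds
cluster group "SQL Group" /moveto:NODE1
```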
"rupart" <rupart@.discussions.microsoft.com> wrote in message
news:8FD9E670-66DC-4540-8829-8A438FF2AE5C@.microsoft.com...
> hi Geoff,
> Actually mine is SQL cluster. Right now 2nd node does not have the SQL
> binaries. How do i get them installed? Today we had power trip and used
> that
> opportunity to do failover. Q failover as expected but not the sql cluster
> binaries because i just relaized the 2nd node doesnt have the binaries. If
> we
> used wizard, it shd have done that(i mean rebuilding the sql) for us. But
> how
> abt in this case? How to do manually? or any other way?
> thanks a lot again
> "Geoff N. Hiten" wrote:
>> If it won't fail over correctly, you will likely have to use Access Logix
>> to
>> logically disconnect all the disks from the host computer at the EMC
>> level.
>> You can then present them back to the host node in the correct order one
>> at
>> a time.
>> This exact thing has happened to me before. I had over 10 clustered
>> disks
>> and the wizard couldn't figure out which one was the Quorum. I manually
>> designated it and everything then matched right up.
>> --
>> Geoff N. Hiten
>> Senior Database Administrator
>> Microsoft SQL Server MVP
>> "rupart" <rupart@.discussions.microsoft.com> wrote in message
>> news:BDE8B45A-42B4-413E-9102-9C539EE5B8FF@.microsoft.com...
>> > hi Geoff,
>> > thank you..let me try when i get the downtime.if it still fails, what
>> > can
>> > i
>> > do to correct the thing?thanks again
>> >
>> > "Geoff N. Hiten" wrote:
>> >
>> >> You did exactly what I suggested. Worst case, SQL won't start due to
>> >> inability to find the files and it will fail back to the first node
>> >> with
>> >> data intact. I doubt this will happen as the cluster services use disk
>> >> signatures to positively ID each disk.
>> >>
>> >> I would wait until a scheduled maintenance window to try a failover,
>> >> just
>> >> so
>> >> you don't have to explain excessive downtime..
>> >>
>> >> --
>> >> Geoff N. Hiten
>> >> Senior Database Administrator
>> >> Microsoft SQL Server MVP
>> >>
>> >> "rupart" <rupart@.discussions.microsoft.com> wrote in message
>> >> news:92121BBB-5AC8-4B9C-9C32-A7650116F3B7@.microsoft.com...
>> >> > hi Geoff,
>> >> > Sorry i was bit unlcear in explaining my current config in earlier
>> >> > msgs.
>> >> > What i am trying to say was
>> >> > in node 1, in disk mgmt, Q drive is mapped in disk 1
>> >> > in node 2, in disk mgmt, Q drive is mapped to disk 4
>> >> >
>> >> > Anyway,
>> >> > In your reply, "override verification wizard" does it mean
>> >> > disabling
>> >> > the
>> >> > heauristic scanning and join the node first? I have joined the
>> >> > cluster
>> >> > using
>> >> > minimum config and skipping the verification process. Now, 2nd node
>> >> > is
>> >> > in
>> >> > cluster and all the disks a hold by 1st node. I wanted to try doing
>> >> > failover
>> >> > but my worries are it's in production and i afraid the incorrect
>> >> > mapping
>> >> > may
>> >> > corrupt the data if i do failover. What you think?
>> >> >
>> >> >
>> >> > "Geoff N. Hiten" wrote:
>> >> >
>> >> >> As long as you are absolutely sure which disk resource is the
>> >> >> Quorum
>> >> >> resource, you can override the verification wizard and designate
>> >> >> the
>> >> >> quorum
>> >> >> disk manually. I have had that problem in the past when there were
>> >> >> many
>> >> >> disks presented to the host nodes. Once the node is added to the
>> >> >> cluster,
>> >> >> the drives map correctly.
>> >> >>
>> >> >> Geoff N. Hiten
>> >> >> Senior Database Administrator
>> >> >> Microsoft SQL Server MVP.
>> >> >>
>> >> >> "rupart" <rupart@.discussions.microsoft.com> wrote in message
>> >> >> news:41E599C0-E0CC-4258-B992-7B11B8D49ADA@.microsoft.com...
>> >> >> > Guys
>> >> >> > I have a cluster running on single node and i am trying to add
>> >> >> > the
>> >> >> > 2nd
>> >> >> > node
>> >> >> > but getting this error. Any idea? I am using external EMC storage
>> >> >> > for
>> >> >> > the
>> >> >> > physical disks.
>> >> >> > The only different i could see is, in node 1, the mapping of
>> >> >> > disks
>> >> >> > in
>> >> >> > disk
>> >> >> > mgmt are different from node 2. Example (Q drive [1G] is in disk
>> >> >> > 1
>> >> >> > in
>> >> >> > node
>> >> >> > 1
>> >> >> > while the 1G disk is mapped in disk in node 2. Cauld that be the
>> >> >> > reason?
>> >> >> > If
>> >> >> > yes, Can i change the sequence? How?
>> >> >> > TIA
>> >> >> >
>> >> >> > ************************************************************
>> >> >> > [INFO] CLUSTER001: Checking that all nodes have access to the
>> >> >> > quorum
>> >> >> > resource...
>> >> >> >
>> >> >> > [INFO] CLUSTER001: Checking that all nodes have access to the
>> >> >> > quorum
>> >> >> > resource...
>> >> >> >
>> >> >> > [INFO] CLUSTER001: The following nodes cannot not verify that
>> >> >> > they
>> >> >> > can
>> >> >> > host
>> >> >> > the quorum resource...
>> >> >> >
>> >> >> > [ERR ] CLUSTER001: Could not verify that node "SERVER001" can
>> >> >> > host
>> >> >> > the
>> >> >> > quorum resource. Ensure that your hardware is properly configured
>> >> >> > and
>> >> >> > that
>> >> >> > all nodes have access to a quorum-capable resource.
>> >> >> >
>> >> >> > [INFO] CLUSTER001: Checking that all nodes have access to the
>> >> >> > quorum
>> >> >> > resource...
>> >> >> >
>> >> >> > [ERR ] CLUSTER001: A multi-node cluster cannot be created because
>> >> >> > there
>> >> >> > is
>> >> >> > not a quorum-capable resource common to all nodes. In some
>> >> >> > complex
>> >> >> > storage
>> >> >> > solutions, such as a fiber channel switched fabric, a particular
>> >> >> > storage
>> >> >> > unit
>> >> >> > might have a different identity (Target ID and/or LUN) on each
>> >> >> > computer
>> >> >> > in
>> >> >> > the cluster. Although this is a valid storage configuration, it
>> >> >> > violates
>> >> >> > the
>> >> >> > storage validation heuristics in the Add Nodes Wizard when using
>> >> >> > the
>> >> >> > default
>> >> >> > Typical (full) configuration option, resulting in an error.
>> >> >> > ************************************************************
>> >> >>
>> >> >>
>> >> >>
>> >>
>> >>
>> >>
>>|||The Cluster wizard only handles the cluster configuration. You still have
to add SQL. As Tom pointed out, there is a section in BOL on maintaining a
failover cluster. The topic How to Add Nodes to a Failover Cluster will
walk you through the basic install. You will need to reapply any service
packs and hotfixes to bring the binaries fully up to date.
--
Geoff N. Hiten
Senior Database Administrator
Microsoft SQL Server MVP
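After the service packs and hotfixes are reapplied, a quick way to confirm the nodes really run identical binaries is to fail the instance over to each node in turn and compare builds. A hedged T-SQL sketch (ComputerNamePhysicalNetBIOS is only available on later builds; SERVERPROPERTY returns NULL for properties it does not know, so just drop that column if needed):

```sql
-- run against the virtual server name after each failover;
-- the build and service-pack level must match on every node
SELECT SERVERPROPERTY('ComputerNamePhysicalNetBIOS') AS current_node,
       SERVERPROPERTY('ProductVersion')              AS build,
       SERVERPROPERTY('ProductLevel')                AS service_pack_level;
```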
"rupart" <rupart@.discussions.microsoft.com> wrote in message
news:8FD9E670-66DC-4540-8829-8A438FF2AE5C@.microsoft.com...
> hi Geoff,
> Actually mine is SQL cluster. Right now 2nd node does not have the SQL
> binaries. How do i get them installed? Today we had power trip and used
> that
> opportunity to do failover. Q failover as expected but not the sql cluster
> binaries because i just relaized the 2nd node doesnt have the binaries. If
> we
> used wizard, it shd have done that(i mean rebuilding the sql) for us. But
> how
> abt in this case? How to do manually? or any other way?
> thanks a lot again
> "Geoff N. Hiten" wrote:
>> If it won't fail over correctly, you will likely have to use Access Logix
>> to
>> logically disconnect all the disks from the host computer at the EMC
>> level.
>> You can then present them back to the host node in the correct order one
>> at
>> a time.
>> This exact thing has happened to me before. I had over 10 clustered
>> disks
>> and the wizard couldn't figure out which one was the Quorum. I manually
>> designated it and everything then matched right up.
>> --
>> Geoff N. Hiten
>> Senior Database Administrator
>> Microsoft SQL Server MVP
>> "rupart" <rupart@.discussions.microsoft.com> wrote in message
>> news:BDE8B45A-42B4-413E-9102-9C539EE5B8FF@.microsoft.com...
>> > hi Geoff,
>> > thank you..let me try when i get the downtime.if it still fails, what
>> > can
>> > i
>> > do to correct the thing?thanks again
>> >
>> > "Geoff N. Hiten" wrote:
>> >
>> >> You did exactly what I suggested. Worst case, SQL won't start due to
>> >> inability to find the files and it will fail back to the first node
>> >> with
>> >> data intact. I doubt this will happen as the cluster services use disk
>> >> signatures to positively ID each disk.
>> >>
>> >> I would wait until a scheduled maintenance window to try a failover,
>> >> just
>> >> so
>> >> you don't have to explain excessive downtime..
>> >>
>> >> --
>> >> Geoff N. Hiten
>> >> Senior Database Administrator
>> >> Microsoft SQL Server MVP
>> >>
>> >> "rupart" <rupart@.discussions.microsoft.com> wrote in message
>> >> news:92121BBB-5AC8-4B9C-9C32-A7650116F3B7@.microsoft.com...
>> >> > hi Geoff,
>> >> > Sorry i was bit unlcear in explaining my current config in earlier
>> >> > msgs.
>> >> > What i am trying to say was
>> >> > in node 1, in disk mgmt, Q drive is mapped in disk 1
>> >> > in node 2, in disk mgmt, Q drive is mapped to disk 4
>> >> >
>> >> > Anyway,
>> >> > In your reply, "override verification wizard" does it mean
>> >> > disabling
>> >> > the
>> >> > heauristic scanning and join the node first? I have joined the
>> >> > cluster
>> >> > using
>> >> > minimum config and skipping the verification process. Now, 2nd node
>> >> > is
>> >> > in
>> >> > cluster and all the disks a hold by 1st node. I wanted to try doing
>> >> > failover
>> >> > but my worries are it's in production and i afraid the incorrect
>> >> > mapping
>> >> > may
>> >> > corrupt the data if i do failover. What you think?
>> >> >
>> >> >
>> >> > "Geoff N. Hiten" wrote:
>> >> >
>> >> >> As long as you are absolutely sure which disk resource is the
>> >> >> Quorum
>> >> >> resource, you can override the verification wizard and designate
>> >> >> the
>> >> >> quorum
>> >> >> disk manually. I have had that problem in the past when there were
>> >> >> many
>> >> >> disks presented to the host nodes. Once the node is added to the
>> >> >> cluster,
>> >> >> the drives map correctly.
>> >> >>
>> >> >> Geoff N. Hiten
>> >> >> Senior Database Administrator
>> >> >> Microsoft SQL Server MVP.
>> >> >>
>> >> >> "rupart" <rupart@.discussions.microsoft.com> wrote in message
>> >> >> news:41E599C0-E0CC-4258-B992-7B11B8D49ADA@.microsoft.com...
>> >> >> > Guys
>> >> >> > I have a cluster running on single node and i am trying to add
>> >> >> > the
>> >> >> > 2nd
>> >> >> > node
>> >> >> > but getting this error. Any idea? I am using external EMC storage
>> >> >> > for
>> >> >> > the
>> >> >> > physical disks.
>> >> >> > The only different i could see is, in node 1, the mapping of
>> >> >> > disks
>> >> >> > in
>> >> >> > disk
>> >> >> > mgmt are different from node 2. Example (Q drive [1G] is in disk
>> >> >> > 1
>> >> >> > in
>> >> >> > node
>> >> >> > 1
>> >> >> > while the 1G disk is mapped in disk in node 2. Cauld that be the
>> >> >> > reason?
>> >> >> > If
>> >> >> > yes, Can i change the sequence? How?
>> >> >> > TIA
>> >> >> >
>> >> >> > ************************************************************
>> >> >> > [INFO] CLUSTER001: Checking that all nodes have access to the
>> >> >> > quorum
>> >> >> > resource...
>> >> >> >
>> >> >> > [INFO] CLUSTER001: Checking that all nodes have access to the
>> >> >> > quorum
>> >> >> > resource...
>> >> >> >
>> >> >> > [INFO] CLUSTER001: The following nodes cannot not verify that
>> >> >> > they
>> >> >> > can
>> >> >> > host
>> >> >> > the quorum resource...
>> >> >> >
>> >> >> > [ERR ] CLUSTER001: Could not verify that node "SERVER001" can
>> >> >> > host
>> >> >> > the
>> >> >> > quorum resource. Ensure that your hardware is properly configured
>> >> >> > and
>> >> >> > that
>> >> >> > all nodes have access to a quorum-capable resource.
>> >> >> >
>> >> >> > [INFO] CLUSTER001: Checking that all nodes have access to the
>> >> >> > quorum
>> >> >> > resource...
>> >> >> >
>> >> >> > [ERR ] CLUSTER001: A multi-node cluster cannot be created because
>> >> >> > there
>> >> >> > is
>> >> >> > not a quorum-capable resource common to all nodes. In some
>> >> >> > complex
>> >> >> > storage
>> >> >> > solutions, such as a fiber channel switched fabric, a particular
>> >> >> > storage
>> >> >> > unit
>> >> >> > might have a different identity (Target ID and/or LUN) on each
>> >> >> > computer
>> >> >> > in
>> >> >> > the cluster. Although this is a valid storage configuration, it
>> >> >> > violates
>> >> >> > the
>> >> >> > storage validation heuristics in the Add Nodes Wizard when using
>> >> >> > the
>> >> >> > default
>> >> >> > Typical (full) configuration option, resulting in an error.
>> >> >> > ************************************************************
>> >> >>
>> >> >>
>> >> >>
>> >>
>> >>
>> >>
>>|||hi Tom,
is BOL = beginning of line? I can't see any link or reference, or does BOL
mean something else...
"Tom Moreau" wrote:
> Check out "How to add nodes to an existing virtual server" in the BOL.
> Also, once that is done, install the service pack on the *new* node, while
> the SQL group is running on the *existing* node.
> --
> Tom
> ----
> Thomas A. Moreau, BSc, PhD, MCSE, MCDBA
> SQL Server MVP
> Columnist, SQL Server Professional
> Toronto, ON Canada
> www.pinpub.com
> "rupart" <rupart@.discussions.microsoft.com> wrote in message
> news:8FD9E670-66DC-4540-8829-8A438FF2AE5C@.microsoft.com...
> > hi Geoff,
> > Actually mine is SQL cluster. Right now 2nd node does not have the SQL
> > binaries. How do i get them installed? Today we had power trip and used
> > that
> > opportunity to do failover. Q failover as expected but not the sql cluster
> > binaries because i just relaized the 2nd node doesnt have the binaries. If
> > we
> > used wizard, it shd have done that(i mean rebuilding the sql) for us. But
> > how
> > abt in this case? How to do manually? or any other way?
> > thanks a lot again
> >
> > "Geoff N. Hiten" wrote:
> >
> >> If it won't fail over correctly, you will likely have to use Access Logix
> >> to
> >> logically disconnect all the disks from the host computer at the EMC
> >> level.
> >> You can then present them back to the host node in the correct order one
> >> at
> >> a time.
> >>
> >> This exact thing has happened to me before. I had over 10 clustered
> >> disks
> >> and the wizard couldn't figure out which one was the Quorum. I manually
> >> designated it and everything then matched right up.
> >>
> >> --
> >> Geoff N. Hiten
> >> Senior Database Administrator
> >> Microsoft SQL Server MVP
> >>
> >> "rupart" <rupart@.discussions.microsoft.com> wrote in message
> >> news:BDE8B45A-42B4-413E-9102-9C539EE5B8FF@.microsoft.com...
> >> > hi Geoff,
> >> > thank you..let me try when i get the downtime.if it still fails, what
> >> > can
> >> > i
> >> > do to correct the thing?thanks again
> >> >
> >> > "Geoff N. Hiten" wrote:
> >> >
> >> >> You did exactly what I suggested. Worst case, SQL won't start due to
> >> >> inability to find the files and it will fail back to the first node
> >> >> with
> >> >> data intact. I doubt this will happen as the cluster services use disk
> >> >> signatures to positively ID each disk.
> >> >>
> >> >> I would wait until a scheduled maintenance window to try a failover,
> >> >> just
> >> >> so
> >> >> you don't have to explain excessive downtime..
> >> >>
> >> >> --
> >> >> Geoff N. Hiten
> >> >> Senior Database Administrator
> >> >> Microsoft SQL Server MVP
> >> >>
> >> >> "rupart" <rupart@.discussions.microsoft.com> wrote in message
> >> >> news:92121BBB-5AC8-4B9C-9C32-A7650116F3B7@.microsoft.com...
> >> >> > hi Geoff,
> >> >> > Sorry i was bit unlcear in explaining my current config in earlier
> >> >> > msgs.
> >> >> > What i am trying to say was
> >> >> > in node 1, in disk mgmt, Q drive is mapped in disk 1
> >> >> > in node 2, in disk mgmt, Q drive is mapped to disk 4
> >> >> >
> >> >> > Anyway,
> >> >> > In your reply, "override verification wizard" does it mean
> >> >> > disabling
> >> >> > the
> >> >> > heauristic scanning and join the node first? I have joined the
> >> >> > cluster
> >> >> > using
> >> >> > minimum config and skipping the verification process. Now, 2nd node
> >> >> > is
> >> >> > in
> >> >> > cluster and all the disks a hold by 1st node. I wanted to try doing
> >> >> > failover
> >> >> > but my worries are it's in production and i afraid the incorrect
> >> >> > mapping
> >> >> > may
> >> >> > corrupt the data if i do failover. What you think?
> >> >> >
> >> >> >
> >> >> > "Geoff N. Hiten" wrote:
> >> >> >
> >> >> >> As long as you are absolutely sure which disk resource is the
> >> >> >> Quorum
> >> >> >> resource, you can override the verification wizard and designate
> >> >> >> the
> >> >> >> quorum
> >> >> >> disk manually. I have had that problem in the past when there were
> >> >> >> many
> >> >> >> disks presented to the host nodes. Once the node is added to the
> >> >> >> cluster,
> >> >> >> the drives map correctly.
> >> >> >>
> >> >> >> Geoff N. Hiten
> >> >> >> Senior Database Administrator
> >> >> >> Microsoft SQL Server MVP.
> >> >> >>
> >> >> >> "rupart" <rupart@.discussions.microsoft.com> wrote in message
> >> >> >> news:41E599C0-E0CC-4258-B992-7B11B8D49ADA@.microsoft.com...
> >> >> >> > Guys
> >> >> >> > I have a cluster running on single node and i am trying to add
> >> >> >> > the
> >> >> >> > 2nd
> >> >> >> > node
> >> >> >> > but getting this error. Any idea? I am using external EMC storage
> >> >> >> > for
> >> >> >> > the
> >> >> >> > physical disks.
> >> >> >> > The only different i could see is, in node 1, the mapping of
> >> >> >> > disks
> >> >> >> > in
> >> >> >> > disk
> >> >> >> > mgmt are different from node 2. Example (Q drive [1G] is in disk
> >> >> >> > 1
> >> >> >> > in
> >> >> >> > node
> >> >> >> > 1
> >> >> >> > while the 1G disk is mapped in disk in node 2. Cauld that be the
> >> >> >> > reason?
> >> >> >> > If
> >> >> >> > yes, Can i change the sequence? How?
> >> >> >> > TIA
> >> >> >> >
> >> >> >> > ************************************************************
> >> >> >> > [INFO] CLUSTER001: Checking that all nodes have access to the
> >> >> >> > quorum
> >> >> >> > resource...
> >> >> >> >
> >> >> >> > [INFO] CLUSTER001: Checking that all nodes have access to the
> >> >> >> > quorum
> >> >> >> > resource...
> >> >> >> >
> >> >> >> > [INFO] CLUSTER001: The following nodes cannot not verify that
> >> >> >> > they
> >> >> >> > can
> >> >> >> > host
> >> >> >> > the quorum resource...
> >> >> >> >
> >> >> >> > [ERR ] CLUSTER001: Could not verify that node "SERVER001" can
> >> >> >> > host
> >> >> >> > the
> >> >> >> > quorum resource. Ensure that your hardware is properly configured
> >> >> >> > and
> >> >> >> > that
> >> >> >> > all nodes have access to a quorum-capable resource.
> >> >> >> >
> >> >> >> > [INFO] CLUSTER001: Checking that all nodes have access to the
> >> >> >> > quorum
> >> >> >> > resource...
> >> >> >> >
> >> >> >> > [ERR ] CLUSTER001: A multi-node cluster cannot be created because
> >> >> >> > there
> >> >> >> > is
> >> >> >> > not a quorum-capable resource common to all nodes. In some
> >> >> >> > complex
> >> >> >> > storage
> >> >> >> > solutions, such as a fiber channel switched fabric, a particular
> >> >> >> > storage
> >> >> >> > unit
> >> >> >> > might have a different identity (Target ID and/or LUN) on each
> >> >> >> > computer
> >> >> >> > in
> >> >> >> > the cluster. Although this is a valid storage configuration, it
> >> >> >> > violates
> >> >> >> > the
> >> >> >> > storage validation heuristics in the Add Nodes Wizard when using
> >> >> >> > the
> >> >> >> > default
> >> >> >> > Typical (full) configuration option, resulting in an error.
> >> >> >> > ************************************************************
> >> >> >>
> >> >> >>
> >> >> >>
> >> >>
> >> >>
> >> >>
> >>
> >>
> >>
>
>|||BOL is SQL Server Books OnLine, the documentation that comes with SQL Server.
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
Blog: http://solidqualitylearning.com/blogs/tibor/
"rupart" <rupart@.discussions.microsoft.com> wrote in message
news:F26E7D76-71C2-4FF6-B913-B0D6EB852DCB@.microsoft.com...
> hi Tom,
> is BOL=beginning of line? i cant see any link or reference or does BOL means
> sth else...
> "Tom Moreau" wrote:
>> Check out "How to add nodes to an existing virtual server" in the BOL.
>> Also, once that is done, install the service pack on the *new* node, while
>> the SQL group is running on the *existing* node.
>> --
>> Tom
>> ----
>> Thomas A. Moreau, BSc, PhD, MCSE, MCDBA
>> SQL Server MVP
>> Columnist, SQL Server Professional
>> Toronto, ON Canada
>> www.pinpub.com
>> "rupart" <rupart@.discussions.microsoft.com> wrote in message
>> news:8FD9E670-66DC-4540-8829-8A438FF2AE5C@.microsoft.com...
>> > hi Geoff,
>> > Actually mine is SQL cluster. Right now 2nd node does not have the SQL
>> > binaries. How do i get them installed? Today we had power trip and used
>> > that
>> > opportunity to do failover. Q failover as expected but not the sql cluster
>> > binaries because i just relaized the 2nd node doesnt have the binaries. If
>> > we
>> > used wizard, it shd have done that(i mean rebuilding the sql) for us. But
>> > how
>> > abt in this case? How to do manually? or any other way?
>> > thanks a lot again
>> >
>> > "Geoff N. Hiten" wrote:
>> >
>> >> If it won't fail over correctly, you will likely have to use Access Logix
>> >> to
>> >> logically disconnect all the disks from the host computer at the EMC
>> >> level.
>> >> You can then present them back to the host node in the correct order one
>> >> at
>> >> a time.
>> >>
>> >> This exact thing has happened to me before. I had over 10 clustered
>> >> disks
>> >> and the wizard couldn't figure out which one was the Quorum. I manually
>> >> designated it and everything then matched right up.
>> >>
>> >> --
>> >> Geoff N. Hiten
>> >> Senior Database Administrator
>> >> Microsoft SQL Server MVP
>> >>
>> >> "rupart" <rupart@.discussions.microsoft.com> wrote in message
>> >> news:BDE8B45A-42B4-413E-9102-9C539EE5B8FF@.microsoft.com...
>> >> > hi Geoff,
>> >> > thank you..let me try when i get the downtime.if it still fails, what
>> >> > can
>> >> > i
>> >> > do to correct the thing?thanks again
>> >> >
>> >> > "Geoff N. Hiten" wrote:
>> >> >
>> >> >> You did exactly what I suggested. Worst case, SQL won't start due to
>> >> >> inability to find the files and it will fail back to the first node
>> >> >> with
>> >> >> data intact. I doubt this will happen as the cluster services use disk
>> >> >> signatures to positively ID each disk.
>> >> >>
>> >> >> I would wait until a scheduled maintenance window to try a failover,
>> >> >> just
>> >> >> so
>> >> >> you don't have to explain excessive downtime..
>> >> >>
>> >> >> --
>> >> >> Geoff N. Hiten
>> >> >> Senior Database Administrator
>> >> >> Microsoft SQL Server MVP
>> >> >>
>> >> >> "rupart" <rupart@.discussions.microsoft.com> wrote in message
>> >> >> news:92121BBB-5AC8-4B9C-9C32-A7650116F3B7@.microsoft.com...
>> >> >> > hi Geoff,
>> >> >> > Sorry i was bit unlcear in explaining my current config in earlier
>> >> >> > msgs.
>> >> >> > What i am trying to say was
>> >> >> > in node 1, in disk mgmt, Q drive is mapped in disk 1
>> >> >> > in node 2, in disk mgmt, Q drive is mapped to disk 4
>> >> >> >
>> >> >> > Anyway,
>> >> >> > In your reply, "override verification wizard" does it mean
>> >> >> > disabling
>> >> >> > the
>> >> >> > heauristic scanning and join the node first? I have joined the
>> >> >> > cluster
>> >> >> > using
>> >> >> > minimum config and skipping the verification process. Now, 2nd node
>> >> >> > is
>> >> >> > in
>> >> >> > cluster and all the disks a hold by 1st node. I wanted to try doing
>> >> >> > failover
>> >> >> > but my worries are it's in production and i afraid the incorrect
>> >> >> > mapping
>> >> >> > may
>> >> >> > corrupt the data if i do failover. What you think?
>> >> >> >
>> >> >> >
>> >> >> > "Geoff N. Hiten" wrote:
>> >> >> >
>> >> >> >> As long as you are absolutely sure which disk resource is the
>> >> >> >> quorum resource, you can override the verification wizard and
>> >> >> >> designate the quorum disk manually. I have had that problem in the
>> >> >> >> past when there were many disks presented to the host nodes. Once
>> >> >> >> the node is added to the cluster, the drives map correctly.
>> >> >> >>
>> >> >> >> Geoff N. Hiten
>> >> >> >> Senior Database Administrator
>> >> >> >> Microsoft SQL Server MVP.
>> >> >> >>
>> >> >> >> "rupart" <rupart@.discussions.microsoft.com> wrote in message
>> >> >> >> news:41E599C0-E0CC-4258-B992-7B11B8D49ADA@.microsoft.com...
>> >> >> >> > Guys,
>> >> >> >> > I have a cluster running on a single node and I am trying to add
>> >> >> >> > the 2nd node, but I am getting the error below. Any idea? I am
>> >> >> >> > using external EMC storage for the physical disks.
>> >> >> >> > The only difference I can see is that the mapping of disks in
>> >> >> >> > Disk Management on node 1 differs from node 2. For example, the
>> >> >> >> > Q drive [1 GB] is disk 1 on node 1, while the same 1 GB disk is
>> >> >> >> > mapped to a different disk number on node 2. Could that be the
>> >> >> >> > reason? If yes, can I change the sequence? How?
>> >> >> >> > TIA
>> >> >> >> >
>> >> >> >> > ************************************************************
>> >> >> >> > [INFO] CLUSTER001: Checking that all nodes have access to the
>> >> >> >> > quorum
>> >> >> >> > resource...
>> >> >> >> >
>> >> >> >> > [INFO] CLUSTER001: Checking that all nodes have access to the
>> >> >> >> > quorum
>> >> >> >> > resource...
>> >> >> >> >
>> >> >> >> > [INFO] CLUSTER001: The following nodes cannot verify that they
>> >> >> >> > can host the quorum resource...
>> >> >> >> >
>> >> >> >> > [ERR ] CLUSTER001: Could not verify that node "SERVER001" can
>> >> >> >> > host
>> >> >> >> > the
>> >> >> >> > quorum resource. Ensure that your hardware is properly configured
>> >> >> >> > and
>> >> >> >> > that
>> >> >> >> > all nodes have access to a quorum-capable resource.
>> >> >> >> >
>> >> >> >> > [INFO] CLUSTER001: Checking that all nodes have access to the
>> >> >> >> > quorum
>> >> >> >> > resource...
>> >> >> >> >
>> >> >> >> > [ERR ] CLUSTER001: A multi-node cluster cannot be created because
>> >> >> >> > there
>> >> >> >> > is
>> >> >> >> > not a quorum-capable resource common to all nodes. In some
>> >> >> >> > complex
>> >> >> >> > storage
>> >> >> >> > solutions, such as a fiber channel switched fabric, a particular
>> >> >> >> > storage
>> >> >> >> > unit
>> >> >> >> > might have a different identity (Target ID and/or LUN) on each
>> >> >> >> > computer
>> >> >> >> > in
>> >> >> >> > the cluster. Although this is a valid storage configuration, it
>> >> >> >> > violates
>> >> >> >> > the
>> >> >> >> > storage validation heuristics in the Add Nodes Wizard when using
>> >> >> >> > the
>> >> >> >> > default
>> >> >> >> > Typical (full) configuration option, resulting in an error.
>> >> >> >> > ************************************************************
>> >> >> >>
>> >> >> >>
>> >> >> >>
>> >> >>
>> >> >>
>> >> >>
>> >>
>> >>
>> >>
>>
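For anyone hitting the same problem, the approach discussed above (verify the disk identity on each node, then test a failover during a maintenance window) can be sketched with the built-in command-line tools. This is a sketch for a Windows Server 2003 cluster; the group name "SQL Server Group" and the node names SERVER001/SERVER002 are placeholders, not names from the thread (confirm your own with `cluster group` and `cluster node /status`):

```shell
:: 1. Confirm the disk signature matches on both nodes. The cluster service
::    identifies disks by signature, not by the Disk Management disk number,
::    so a disk appearing as "disk 1" on one node and "disk 4" on the other
::    is normal. Run diskpart on each node and compare the Disk ID:
::      diskpart
::      DISKPART> list disk
::      DISKPART> select disk 1
::      DISKPART> detail disk

:: 2. Check that both nodes are joined and online:
cluster node /status

:: 3. During a maintenance window, test failover by moving the SQL Server
::    resource group to the newly added node:
cluster group "SQL Server Group" /moveto:SERVER002

:: 4. If SQL Server fails to come online on the new node, move the group
::    back to the original node; the data on the shared disk is unaffected:
cluster group "SQL Server Group" /moveto:SERVER001
```

These commands must run on a cluster node with the Cluster Administrator tools installed; they are shown here only to make the manual steps in the thread concrete.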